Safeguarding your organisation when adopting AI

read time: 4 mins
17.04.24

A Bloomberg Intelligence report published at the end of last year predicts that the AI market is set to explode, growing from a valuation of circa $40 billion in 2022 to $1.3 trillion by 2032.

Bloomberg Intelligence believe this growth will largely be driven by the increasing volume of generative AI programs, which currently include OpenAI's ChatGPT and Google's Bard. At present it is estimated that AI comprises just 1% of all IT hardware, software services and gaming market spending; however, this is anticipated to increase to 10% by the beginning of the next decade.

Organisations of all shapes and sizes will need to consider how best to protect themselves as they begin to make use of the increasing number of AI solutions coming to market. This article provides practical advice on how organisations can support themselves when adopting AI.

How will AI affect litigation?

Advances in AI are likely to create huge opportunities for early adopters. Equally, we anticipate that the increasing availability and uptake of AI will lead to a rapid increase in litigation as companies and consumers alike seek redress for harms suffered in connection with this technology. Developers, suppliers and users of AI systems will need to keep a close eye on the development of the law in this area, as at present it remains far from certain how liability will be allocated between these parties when damage occurs.

Risk allocation and liability

English contract law places relatively few restrictions on parties seeking to agree the terms of a contract, enabling them to freely define the commercial basis of their relationship and allocate risk as they see fit. Whilst protections do exist, often to redress an imbalance of power between the parties and most notably to protect consumers in B2C relationships, when considering a dispute the courts generally seek to give effect to the intentions of the parties. The courts are slow to intervene where one party has struck a poor bargain or has failed to negotiate terms which adequately protect its position.

Developers and suppliers of AI systems typically include robust exclusions of liability within their terms. Depending on the bargaining position of the parties, many will simply offer their standard terms and conditions on a 'take it or leave it' basis, as is the case with many providers of enterprise software or cloud computing systems. However, the extent to which these exclusions of liability will be effective in relation to the supply of AI systems is yet to be tested by the courts. There is therefore great uncertainty for all parties within the supply chain, from developer to ultimate consumer/user, as to how the courts will determine questions of liability.

It will be fascinating to see how the courts develop the law in this area, likely seeking to safeguard the interests of businesses and consumers using this technology whilst ensuring their approach does not unreasonably impede the growth of the sector. This is the approach the government itself has alluded to adopting in its white paper ‘A pro-innovation approach to AI regulation’, as it considers the allocation of liability in connection with AI technologies and the scope and focus of any future legislation.

How to support your organisation when adopting AI

We believe there are several things you can already be doing to support your organisation's ongoing adoption of AI:

  1. Undertake a careful risk assessment of your use of AI, including consideration of the potential harm it could cause to third parties. Carefully document your rationale for using AI technologies and keep a log updated with any points of significance. If any issues arise, carefully consider whether you can justify the continued use of the application in question.
  2. Put in place an internal policy for the use of AI and make it clear to employees and contractors that AI technologies must only be used in accordance with that policy.
  3. Businesses leveraging AI to provide services to their clients will also need to consider how the use of this technology might interact with professional negligence claims. Please see this article for further detail on this topic.
  4. As with any supplier, carefully review an AI developer's/supplier's terms and conditions and, where possible, seek to remove any onerous exclusions of, or caps on, their liability.
  5. Keep abreast of future updates from the government following the circulation of the aforementioned AI white paper and the recent consultation outcome.
  6. Those looking to use AI technologies within the European Union (EU) or deliver services supported by AI to individuals or organisations within this jurisdiction should also pay close attention to the progress of the EU's Artificial Intelligence Act, anticipated to come into force in 2026.
  7. Assess whether your existing insurance policies adequately safeguard against the risks posed by AI.

For more information, please contact Hugh Houlston.
