A Bloomberg Intelligence report published at the end of last year predicted explosive growth in the AI market, from a valuation of circa $40 billion in 2022 to $1.3 trillion by 2032.
Bloomberg Intelligence believe this will largely be driven by growth in the volume of generative AI programs, currently including OpenAI's ChatGPT and Google's Bard. At present, AI is estimated to comprise just 1% of all spending on IT hardware, software services and gaming; this is anticipated to increase to 10% by the beginning of the next decade.
Organisations of all shapes and sizes will need to consider how best to protect themselves as they begin to make use of the increasing number of AI solutions coming to market. This article offers advice on how organisations can protect their position when adopting AI.
Advances in AI are likely to create huge opportunities for early adopters. Equally, we anticipate that the increasing availability and uptake of AI will lead to a rapid rise in litigation as companies and consumers alike seek redress for harms suffered in connection with this technology. Developers, suppliers and users of AI systems will need to keep a close eye on the development of the law in this area, as it remains far from certain how liability will be allocated between these parties when damage occurs.
English contract law places relatively few restrictions on parties seeking to agree the terms of a contract, enabling them to freely define the commercial basis of their relationship and allocate risk as they see fit. Whilst protections do exist, often to redress an imbalance of power between the parties and most notably to protect consumers in B2C relationships, the courts, when considering a dispute, generally seek to give effect to the intentions of the parties. They are slow to intervene simply because one party has struck a poor bargain or failed to negotiate terms which adequately protect its position.
Developers and suppliers of AI systems typically include robust exclusions of liability within their terms. Depending on the bargaining position of the parties, many will simply offer their standard terms and conditions on a take-it-or-leave-it basis, as is the case with many providers of enterprise software or cloud computing systems. However, the extent to which these exclusions of liability will be effective in relation to the supply of AI systems is yet to be tested by the courts. There is therefore great uncertainty for all parties within the supply chain, from developer to ultimate consumer or user, as to how the courts will determine questions of liability.
It will be fascinating to see how the courts develop the law in this area, likely seeking to safeguard the interests of businesses and consumers using this technology whilst ensuring their approach does not unreasonably impede the growth of the sector. This is the approach the government itself has alluded to adopting in its white paper, 'A pro-innovation approach to AI regulation', as it considers the allocation of liability in connection with AI technologies and the scope and focus of any future legislation.
We believe there are several steps you can already be taking to support your organisation's ongoing adoption of AI:
For more information, please contact Hugh Houlston.