Navigating AI in the healthcare sector: Jocelyn Ormond presents at Royal Society of Medicine webinar

24.06.24

Jocelyn Ormond, corporate partner and head of the healthcare & life sciences sector at Ashfords, was invited to speak at a Royal Society of Medicine all-day webinar in April 2024. The webinar, which was chaired by Dr Michelle Tempest (a member of the RSM's Digital Health Council and a partner at Candesic), explored patient perspectives and clinical realities regarding the use of AI in healthcare.

Presenting as the penultimate speaker, Jocelyn provided a ‘helicopter’ view of certain key legal risks that the audience should bear in mind when developing or using AI in their practice (over and above regulatory requirements in relation to medical devices generally). 

This article summarises some of the observations that Jocelyn shared during the webinar, highlighting how the UK healthcare sector uses AI, the different regulatory approaches in place in the UK and the European Union, some particular challenges presented by intellectual property and data protection law, and what we can expect to see in the future. 

How is the UK healthcare sector using AI?

The UK Conservative government has demonstrated its commitment to investing in AI in the sector in various ways, including by allocating £100m, announced in October 2023, to support the use of AI in life sciences and healthcare.

The potential scope for AI in healthcare is almost limitless. Numerous case studies have shown that AI improves the patient experience, from the design of treatment plans to the provision of a diagnosis. By using AI, the sector has been able to accelerate clinical trials and support the innovation needed to create new drugs and medical devices.

As illustrated by other speakers at the webinar, AI can also benefit clinicians by cutting their administrative burden, improving response times and accuracy in triage and emergencies, increasing the speed and efficiency of treatment, and in many other ways. AI-driven solutions are helping to improve the experience of doctors, caregivers and patients, to support patient wellbeing, and to tackle the rising costs of healthcare and the shortage of staff.

Adopting AI has also been shown to significantly improve clinical governance by accelerating the decision-making process with greater consistency and quality.

What are the main differences in the UK’s and the EU’s regulatory approaches?

There are a number of legal, regulatory and ethical considerations when AI is used within this sector. 

Even before focusing on the enhanced regulation of AI in the UK and EU, it should be noted that an AI solution's underlying software may need to be classified as a medical device under the EU Medical Device Regulation (with corresponding UK regulations, although the EU and UK classification regimes diverged in 2021).

Due to the universal, cross-border nature of AI and healthcare, it is prudent for anyone developing or using AI in either a UK or EU context to look at both the UK’s and the EU’s current approach to regulating AI.

The UK's vertical approach

The UK white paper, 'A pro-innovation approach to AI regulation', adopted a principles-based framework, encouraging innovation while monitoring risks against five principles:

  • Safety, security and robustness
  • Appropriate transparency and explainability
  • Fairness
  • Accountability and governance
  • Contestability and redress

The white paper proposed that AI risks should be addressed through existing regulatory frameworks, meaning there is no single new AI regulator and no comprehensive set of new laws. This may lead to uncertainty about what compliance requires and about the consequences of non-compliance.

The government has also published 'Implementing the UK's AI regulatory principles' to guide regulators in implementing and applying the principles within their remits.

Safety is high on the UK's agenda when it comes to AI. The UK has established the government-backed AI Safety Institute (AISI), which evaluates AI systems with a high potential for harm.

The UK plays a key role in international cooperation on AI regulation. It hosted the first AI Safety Summit in November 2023, at which the attending countries agreed the Bletchley Declaration, committing to ensure that AI is designed, developed and used safely.

The UK has also endorsed the G7 Hiroshima International Guiding Principles for Organizations Developing Advanced AI Systems, which aim to promote safe, secure and trustworthy AI worldwide. In addition, in April 2024 the UK and US signed a Memorandum of Understanding committing both countries to collaborate on testing and evaluating the safety of AI tools and systems.

The EU's horizontal approach

The EU Artificial Intelligence Act, which has been approved by the Council of the EU but is awaiting publication in the Official Journal of the European Union (OJEU) before it takes effect, applies to AI systems placed on the market, put into service or used in the EU, irrespective of where the provider is located.

The Act creates a new layer of requirements on top of many existing technology-neutral obligations in the EU. Although it is not directly applicable in the UK, the Act is expected to have a ‘soft’ influence on the UK market.

The Act presents a risk-based classification of AI systems:

  • Prohibited AI: banned with limited exceptions.
  • High risk AI, including health tech solutions: permitted subject to rigorous testing, proper documentation of data quality and an accountability framework that details human oversight.
  • Limited risk: permitted subject to light transparency obligations.
  • Minimal or low risk: will not be regulated under the EU AI Act.

General purpose AI is subject to its own compliance requirements (with the most onerous applying to models posing systemic risk), including:

  • Technical documentation for authorities and downstream providers
  • Model evaluation
  • Risk assessment and management
  • Cybersecurity protections
  • Serious incident reporting

There are heavy fines for violations, the top fine being the higher of €35 million or 7% of global annual turnover for the most serious violations. For small and medium-sized enterprises, fines are capped at the lower of the applicable amounts.

Read our article to discover what effect the Act will have on the healthcare, digital health and life sciences sector.

Intellectual property law, data protection compliance and other key legal risks in AI in healthcare

There are a number of legal risks that the UK healthcare sector could face when using AI. 

Duty of care: Care providers owe a duty of care regardless of the systems used to deliver that care. However, the use of AI may redefine the standard of care: inappropriate use of AI tools may, for example, be considered negligent. Conversely, there is a possibility that in the future a failure to use AI tools that deliver better and more accurate care could itself be a breach of the duty.

Data protection: AI systems rely on the extensive use of data, including personal data. Such use is subject to data protection legislation, including UK data protection laws and, where controllers or processors have an EU establishment or offer services to data subjects in the EU, the EU GDPR.

Ensuring data protection compliance within the supply chain can be challenging, as an AI tool may depend on components accessed, managed and controlled by many parties in a complex ecosystem. Honouring data subject rights can also be difficult: for example, once personal data has been absorbed into a machine learning system built on an artificial neural network, it may be impossible in practice to isolate that data in order to modify, delete or extract it.

Intellectual property: The use of non-personal data, such as training data or input data for AI systems, can carry intellectual property infringement risks. Such data may contain proprietary information, know-how and trade secrets that shouldn't be used without guardrails, and even publicly available information may have IP rights attached. The Copyright, Designs and Patents Act 1988 contains very limited exceptions, but they are unlikely to apply to commercialised AI.

In respect of AI outputs, it is important to specify who owns the deliverables, derivative works and data, and to define the licence terms attaching to those outputs, such as the extent to which a supplier may retain outputs to train or improve its AI system. It should also be noted that in the UK only a natural person can be named as an inventor on a patent, although the position has become less clear where an artificial neural network is involved.

Looking ahead

Under the UK’s principle-based approach, the existing legal frameworks and regulations continue to apply to AI technology. That said, regulators have provided further guidance to provide market participants with more clarity when dealing with AI. 

For example, in June 2023 the Medicines and Healthcare products Regulatory Agency (MHRA) updated its ongoing Software and AI as a Medical Device Change Programme with various work packages to develop the regulatory framework in the healthcare sector, protecting patients and the public while accelerating responsible innovation.

The MHRA, the Care Quality Commission, the NHS Health Research Authority and the National Institute for Health and Care Excellence have also collaborated on the AI and Digital Regulations Service, which advises the health and social care sector on which regulations to follow and how to evaluate an AI system's effectiveness.

There have been calls for a change in the UK's current approach to AI regulation. Lord Holmes of Richmond introduced a private members' bill, the Artificial Intelligence (Regulation) Bill, which would establish a new AI authority, and the Sunak government tabled the Data Protection and Digital Information Bill, which would overhaul the current data protection regime and, as a result, affect information standards for health and social care. Both bills fell when Parliament was dissolved ahead of the general election on 4 July 2024, and it is unclear whether these proposals (or proposals like them) will be reintroduced afterwards.

We would expect the law to continue to evolve in response to technological advances, although almost certainly at a slower pace than the technology itself. We would also expect more guidance from lawmakers, as well as case law from the courts considering the novel issues these advances raise and setting new legal precedents.

In the meantime, regulatory bodies have been active in establishing a regulatory framework for AI adoption in the sector, as they can move more quickly than lawmakers. That said, given the nature of the sector, they are likely to proceed cautiously, balancing public and patient safety with innovation.

For more information, please contact Jocelyn Ormond or Brett Lambe, senior associate in our commercial team.
