International: Risks of AI for the healthcare sector

read time: 7 mins
16.10.24

Artificial intelligence (AI) is not new, but recent years have seen huge growth in its use cases and it is now changing the way that many sectors operate. This article explores the impact of AI on the healthcare sector and the key risks of using AI within healthcare.

How is the healthcare sector utilising AI?

Medical professionals are already using AI in numerous ways, and the future potential for AI in the medical sphere is staggering. AI is utilised to analyse and interpret X-rays and scans, automate parts of surgical procedures, improve diagnostics and create personalised treatment plans.

Predictive AI can also be used to analyse health data and predict future health issues, enabling people to take preventative action before those issues arise.

Global health services are coming under increasing pressure from growing demand and rising costs, and are looking to AI technologies to improve efficiency and, eventually, reduce expenditure.

The European Commission has expressed a long-term goal of achieving the effective implementation of AI in the healthcare sector.

Promoting innovation within healthcare is also a key agenda item for the newly elected Labour government in the United Kingdom (UK). In particular, the Labour government has committed to ‘harness the power of technologies like AI to transform the speed and accuracy of diagnostic services’ to improve the UK’s National Health Service.

What are the key risks for the use of AI in the healthcare sector?

Although AI presents huge opportunities for the healthcare sector, it also poses significant risks. We are seeing varying approaches to the regulation of AI technologies to address these risks, but it is helpful to first explore what these risks look like.

  • Bias and discrimination: If the dataset on which an AI system is trained is biased, the resulting AI system will be biased too. Much of the medical data available for training AI models is biased due to the under-representation of particular groups within clinical trials and medical studies, as well as within communities. Additionally, disparity in access to AI technologies perpetuates the issue. There are real concerns that future AI solutions could amplify the existing imbalances and biases that result in healthcare inequality.
  • Inaccuracy and errors: Errors in an AI system can cause substantial harm to individuals; an incorrect or missed diagnosis, for example, can result in a patient failing to receive necessary treatment. Additionally, AI systems, like any technology, can be subject to faults, which can leave key infrastructure unavailable.
  • Misuse: Just as an error in an AI system can result in harm to individuals, so can misuse of an AI system. We rely on healthcare professionals to implement and utilise innovative AI technologies correctly and efficiently, but this depends on them receiving proper training. Additionally, where AI systems are made available to people who are not healthcare professionals, for example through publicly available health tech apps, users are left interpreting results with limited information about how they were generated and without professional medical advice.
  • Privacy: Data protection concerns are central to many AI discussions, but they are especially acute for AI systems within the healthcare sector due to the vast volumes of sensitive health data that such systems process. Under many global privacy laws, organisations are required to treat health data with a greater degree of care because of the increased risk of harm to individuals if their health data is unlawfully processed or the subject of a data breach. The additional processing requirements for special category data under both European Union (EU) and UK privacy laws are a key example. Using AI technology in a way that complies with applicable privacy laws presents a number of challenges, including difficulties: (i) establishing a lawful basis for processing (and, where necessary, a special category processing condition); (ii) providing data subjects with transparent processing information so that they understand how the AI system processes their personal data; and (iii) enabling data subjects to exercise their legal rights; for example, it may be impossible to isolate and rectify, delete or extract personal data that has been entered into an AI system.
  • Security: Security goes hand-in-hand with privacy. Ensuring the security of AI systems is crucial both for protecting the data they process and for complying with applicable privacy laws. If an AI system is subject to a cyber-attack, there is a risk not only to the privacy rights of individuals but also of system unavailability, which could obstruct access to critical medical care.

AI regulation

AI regulation is an important means of addressing and mitigating these risks.

The EU published the EU Artificial Intelligence Act (the EU AI Act) in the Official Journal of the European Union on 12 July 2024, and the Act entered into force on 1 August 2024. Some provisions will become applicable within the next six to twelve months, but most will apply two years after entry into force. The EU AI Act classifies AI systems by risk level: minimal or low risk systems are not regulated by the Act; limited risk systems are subject to light transparency obligations; high risk AI systems are permitted subject to specific requirements, including rigorous testing, transparency, data quality and an accountability framework that provides for human oversight; and prohibited AI systems are banned, with limited exceptions. Many AI health tech solutions will be categorised as high risk AI systems.

In contrast, the UK’s Artificial Intelligence (Regulation) Bill (the AI Regulation Bill), which would have introduced a new AI authority in the UK, was put on hold as a result of the recent UK general election. Nor was it included in the King’s Speech on 17 July 2024, the speech written by the UK’s new Labour government and delivered by the King to set out the government’s upcoming legislative agenda. It is currently unclear whether the AI Regulation Bill or any similar proposals will be reintroduced in the future. For now, though, the UK's principles-based approach to regulating AI remains unchanged.

Under this principles-based approach, the UK government empowers existing regulators, including the Medicines and Healthcare products Regulatory Agency (MHRA), the Information Commissioner’s Office (ICO) and the Health and Safety Executive (HSE), to regulate the use of AI within their industries and sectors in line with key principles. These principles are: (i) safety, security and robustness; (ii) transparency and explainability; (iii) fairness; (iv) accountability and governance; and (v) contestability and redress.

Mitigating the risks of AI in the healthcare sector

Regulation, whether by new legislation or by empowering existing regulators, is an important way to address the risks that are inherent in AI technology. As well as complying with legal frameworks and applicable codes of practice, it is important that organisations developing or deploying AI technologies within the healthcare sector carefully consider the relevant risks and take responsibility for driving forward innovation in the sector in a sustainable, fair and accountable way.

This requires organisations to prioritise impact assessments, invest time in proper due diligence and testing, and procure balanced and representative datasets for their AI models.

Additionally, human intervention will continue to be important in catching AI errors, with medical professionals continuing to play a key role in the delivery of medical care and advice, assisted by AI. Medical professionals also require proper training in how to utilise AI, both during early stage medical education and on an ongoing basis throughout their careers.

Concluding comments

AI has the potential to improve global health and address some of the key challenges that the healthcare sector faces, for example insufficient funding, personnel shortages and growing demand (partly due to an ageing global population). Although the use of AI in the healthcare sector carries significant risks, the opportunities are so great that it is important we find meaningful ways to address those risks.

For more information, please contact Hannah Pettit.
