Artificial intelligence (AI) is not new, but we have seen a huge growth in use cases in recent years and it is now changing the way that many sectors operate. This article explores the impact of AI on the healthcare sector and the key risks of using AI within healthcare.
Medical professionals are already using AI in many ways, and the future potential for AI in the medical sphere is staggering. AI is used to analyse and interpret X-rays and scans, automate parts of surgical procedures, improve diagnostics and create personalised treatment plans.
Predictive AI can also analyse health data and make predictions about future health issues, enabling people to take preventative action before those issues arise.
Global health services are coming under increasing pressure from growing demand and rising costs, and are looking to AI technologies to improve efficiency and eventually reduce expenditure.
The European Commission has expressed a long-term goal of achieving the effective implementation of AI in the healthcare sector.
Promoting innovation within healthcare is also a key agenda item for the newly elected Labour government in the United Kingdom (UK). In particular, the Labour government has committed to ‘harness the power of technologies like AI to transform the speed and accuracy of diagnostic services’ to improve the UK’s National Health Service.
Although AI presents huge opportunities for the healthcare sector, it also poses significant risks. Regulation is an important means of addressing and mitigating those risks, and we are seeing varying approaches to the regulation of AI technologies across different jurisdictions.
The EU published the EU Artificial Intelligence Act (the EU AI Act) in the Official Journal of the European Union on 12 July 2024, and it entered into force on 1 August 2024. Some provisions will become applicable within six to twelve months of entry into force, but most will apply two years after that date. The EU AI Act classifies AI systems by risk level: (i) minimal or low risk systems, which are not regulated by the EU AI Act; (ii) limited risk systems, which are subject to light transparency obligations; (iii) high risk AI systems, which are permitted subject to specific requirements including rigorous testing, transparency, data quality and an accountability framework providing for human oversight; and (iv) prohibited AI systems, which are banned subject to limited exceptions. Many AI health tech solutions will be categorised as high risk AI systems.
In contrast, the UK’s Artificial Intelligence (Regulation) Bill (the AI Regulation Bill), which would have introduced a new AI authority in the UK, was put on hold as a result of the recent UK general election and was not included in the King’s Speech on 17 July 2024, the speech written by the UK’s new Labour government and delivered by the King to set out the government’s upcoming legislative agenda. It is currently unclear whether the AI Regulation Bill or any similar proposals will be reintroduced in the future. For now, though, the UK’s principles-based approach to regulating AI remains unchanged.
Under this principles-based approach, the UK government empowers existing regulators, including the Medicines and Healthcare products Regulatory Agency (MHRA), the Information Commissioner’s Office (ICO) and the Health and Safety Executive (HSE), to regulate the use of AI within their industries and sectors in line with key principles. These principles are: (i) safety, security and robustness; (ii) transparency and explainability; (iii) fairness; (iv) accountability and governance; and (v) contestability and redress.
Regulation, whether by new legislation or by empowering existing regulators, is an important way to address the risks that are inherent in AI technology. As well as complying with legal frameworks and applicable codes of practice, it is important that organisations developing or deploying AI technologies within the healthcare sector carefully consider the relevant risks and take responsibility for driving forward innovation in the sector in a sustainable, fair and accountable way.
This requires organisations to prioritise impact assessments, invest time in proper due diligence and testing, and procure balanced and representative datasets for their AI models.
Additionally, human oversight will remain important in guarding against AI errors, with medical professionals continuing to play a key role in the delivery of medical care and advice, assisted by AI. Medical professionals also require proper training on how to use AI, both during early-stage medical education and on an ongoing basis throughout their careers.
AI has the potential to improve global health and address some of the key challenges facing the healthcare sector, such as insufficient funding, personnel shortages and growing demand (partly driven by an ageing global population). Although the use of AI in the healthcare sector carries significant risks, the opportunities are so great that it is important we find meaningful ways to address those risks.
For more information, please contact Hannah Pettit.