The ICO’s strategic approach to regulating AI: what it means for your business

read time: 4 mins
29.05.24

The Information Commissioner’s Office (ICO), the regulator responsible for promoting and enforcing the UK’s privacy legislation, has recently released its strategy document, ‘Regulating AI: the ICO’s strategic approach’, setting out how it intends to regulate artificial intelligence (AI). This article reviews the key aspects of the ICO’s strategy document, highlighting the guidance and advice available to businesses and the upcoming developments regarding AI regulation.

The ICO’s approach to regulating AI

Whether your organisation is a developer of AI systems, with control over their design, or a deployer, using AI systems in its business, it is important to be aware of the ICO’s strategic approach to regulating AI. Doing so will help your organisation maintain compliance with data protection law, foster trust and accountability, manage risks, adopt best practices and prepare for future public and regulatory scrutiny.

The ICO’s strategy opens by outlining the opportunities and risks of AI and how the ICO considers itself to be at the coalface of regulating emerging technologies such as AI, both now and in the future, given that data protection law is technology-neutral and applies to any processing of personal data, regardless of the technology involved. As a result, the ICO does not recommend new legislation to mitigate the risks posed by AI, but instead argues that the existing UK regulators need to be empowered to hold organisations properly to account.

Guidance on AI and data protection

However, as highlighted in the government’s white paper, ‘A pro-innovation approach to AI regulation’, there is concern across industry that the absence of cross-cutting AI regulation creates uncertainty and inconsistency, which can undermine business and consumer confidence in AI and stifle innovation. Despite this concern, in the second part of the ICO’s strategy, the ICO explains that, because the existing principles of data protection law already mirror the principles set out in the government’s white paper, further legislation and/or regulation is not necessary.

We will have to wait and see whether the government decides to introduce free-standing legislation to regulate AI, perhaps adopting an approach similar to the EU’s AI Act. In the meantime, and even if new legislation is enacted, AI developers and deployers need to ensure compliance with data protection law, and they will be well served by following the ICO’s guidance on AI and data protection.

The ICO’s strategy serves as a helpful reminder to organisations in the AI marketplace about the seven key principles under data protection law, how they relate to AI systems, and what guidance is available to ensure compliance with those principles. For example, in relation to the principle of lawfulness, fairness and transparency, the ICO points organisations to its guidance on explaining decisions made with AI, co-developed with the Alan Turing Institute. In addition, in relation to the principle of accountability, the ICO has outlined in its strategy that it intends to further clarify the responsibilities of AI developers and deployers as part of its generative AI consultation series.

Guidance, advice and support for AI innovators

The third part of the ICO’s strategy reminds organisations about the ICO’s work on AI, and the guidance, advice and support it offers to AI innovators. This includes the ICO’s AI and data protection risk toolkit and its regulatory sandbox, which aims to provide in-depth support to organisations that develop products and services using data in innovative and novel ways.

However, a significant part of the ICO’s work necessarily involves taking regulatory action to enforce the law and safeguard people from harm. The ICO’s strategy therefore also reminds organisations of the regulatory action that could be taken against them if they fail to comply with data protection law, highlighting the different types of notices the ICO can issue, such as information notices, assessment notices, enforcement notices and penalty notices.

A recent example of a monetary penalty notice is the ICO’s regulatory action against Clearview AI Inc, which was fined over £7.5 million for collecting billions of images of people’s faces and data from the internet to create an online facial recognition database without the relevant people’s knowledge. This is a salient reminder of the implications of failing to comply with data protection law when developing AI systems.

Upcoming developments

The fourth part of the ICO’s strategy outlines upcoming developments. The ICO sets out its three focus areas for 2024/25 as being AI’s application in biometric technologies, protecting children’s privacy and online tracking. In relation to AI’s application in biometric technologies, the ICO plans to consult on how biometric classification technologies, such as those used to draw inferences about people’s emotions or characteristics, should be developed and deployed.

The ICO’s strategy illustrates the ICO’s crucial role, alongside other UK regulators, in regulating the use of AI, together with the importance of data protection compliance when developing or deploying any new AI technology. The ICO has shown that it is committed to educating and assisting organisations to meet their regulatory obligations and use AI responsibly. Organisations should refer to the ICO’s various pieces of AI guidance, including those detailed above, to assist them in achieving data protection compliance, especially as AI technologies and use cases continue to develop at a fast pace.

For more information, please contact our privacy and data team.
