Generative artificial intelligence (AI) models, such as OpenAI’s ChatGPT, are a hot topic. Generative AI is AI that can generate content (for example, text, images or audio) as an output in response to the input data fed to the model. This input data is also commonly referred to as training data, as it is this data that trains the AI model.
Utilising AI has clear benefits, such as increased efficiency and cost savings. However, there are also wide-ranging concerns about its negative impacts on individuals and society as a whole. These include privacy, confidentiality and intellectual property infringement risks, as well as ethical concerns arising from AI biases and the economic harm of job roles being replaced. Added to this are the warnings issued by the likes of Sundar Pichai and Geoffrey Hinton of Google about the dangers of AI, given the speed at which it is developing and the possibility that AI models could one day become more intelligent than us.
While businesses need to be alive to the wider considerations mentioned above, in this article we focus specifically on the privacy considerations for those looking to utilise generative AI tools such as ChatGPT.
On 31 March 2023, the Italian Data Protection Authority, the Garante, issued a temporary ban on ChatGPT. Its key concerns were that insufficient information had been provided to individuals about OpenAI’s use of their personal data, that OpenAI had no lawful basis for using vast volumes of personal data to train ChatGPT, and that ChatGPT can produce inaccurate information about people.
A month later, the temporary ban was lifted after OpenAI co-operated with and responded to the Garante’s objections. In particular, OpenAI improved its privacy policy, enabled data subjects to request deletion of information they consider to be inaccurate and confirmed that data subjects can opt out of their information being used to train the AI model by completing an online form.
Whilst these measures go some way towards improving the protection afforded to data subjects, they have not resolved wider privacy concerns surrounding generative AI. Generative AI models have been trained on publicly available data, including personal data which is likely to have been collected unlawfully, so it is no surprise that these tools will be subject to extensive regulatory scrutiny.
Other European privacy regulators have stopped short of implementing bans on ChatGPT but have requested further information from OpenAI. This therefore won’t be the last that we hear about ChatGPT’s privacy compliance. The European Data Protection Board has also set up a task force to co-ordinate enforcement action in respect of ChatGPT across European privacy regulators.
Meanwhile, in the UK, the Information Commissioner’s Office (ICO) has issued guidance regarding responsible use of generative AI which aligns with the concerns of European privacy regulators. The ICO has confirmed that it will be asking questions of organisations that are developing or using generative AI and that it will take action where organisations are not following the law or considering the impact on individuals.
There is currently no AI-specific privacy regime. Businesses simply need to apply existing data protection obligations to AI-based data processing; this is what the ICO expects to see from organisations that are either developing or using generative AI.
It is easy to see why there is significant pressure for meaningful AI regulation. Lawmakers have struggled to keep pace with rapid developments in the AI sector; however, an AI regulatory framework is on the horizon in both the UK and the EU.
For now though, where privacy is concerned, businesses need to focus on ensuring that their use of AI tools complies with applicable data protection laws, whilst also keeping abreast of the outcomes of privacy regulator investigations into generative AI tools such as ChatGPT.
For more information on the article above contact Hannah Pettit or Suzie Miles.