Risks of AI for the financial services sector

read time: 8 mins
23.10.24

Artificial intelligence (AI) is actively transforming the delivery of financial services. Firms are leveraging new and emerging tools to streamline operations, design more sophisticated products and services, and deliver innovative user experiences.

However, the embrace of AI needs to be balanced against the risks it poses to all parties. This article considers the impact of AI in financial services, identifying use cases, key risks and the latest regulatory developments.

How is AI being utilised in financial services?

AI has a variety of use cases within financial services. Examples benefiting customers, firms and regulators include:

Customers can receive more tailored and personalised products and services

AI’s ability to digest vast volumes of data (historic and real-time) can identify customer preferences and traits, enabling better-calibrated product delivery, for example through credit or underwriting decisions for a loan product or insurance policy. This could result in fairer determination of the products presented to customers, or a more competitive quotation and pricing process based on a more granular assessment of personal circumstances. Equally, the integration of chatbots and similar customer service tools can improve user engagement and support the handling of queries and management of complaints.

Firms can benefit from greater operational efficiency and risk management, which may lead to more profitable business models

AI allows for deeper automation of risk and compliance processes, such as anti-money laundering (AML) checks and onboarding, fraud detection and transaction monitoring, including for the safeguarding and reconciliation of client assets or monies.

Equally, AI can assist firms to aggregate and analyse data sets more effectively, which could help with monitoring prudential and capital requirements, and with regulatory reporting and risk management generally. This could lead to cost savings and leaner operational models for firms.
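
To make the transaction monitoring example concrete, below is a minimal sketch of the kind of rule-plus-score check a firm might automate. It is purely illustrative: the thresholds, country codes and field names are hypothetical, and a production system would combine far richer signals, often with a trained model.

```python
from dataclasses import dataclass

# Illustrative sketch only: thresholds, country codes and weights are
# hypothetical, not drawn from any regulatory standard or real product.

@dataclass
class Transaction:
    customer_id: str
    amount: float
    country: str  # counterparty jurisdiction

HIGH_RISK_COUNTRIES = {"XX", "YY"}  # placeholder codes
AMOUNT_THRESHOLD = 10_000.0         # hypothetical review threshold

def risk_score(tx: Transaction, customer_avg_amount: float) -> float:
    """Combine simple signals into a 0-1 risk score."""
    score = 0.0
    if tx.amount > AMOUNT_THRESHOLD:
        score += 0.5
    if tx.country in HIGH_RISK_COUNTRIES:
        score += 0.3
    if customer_avg_amount > 0 and tx.amount > 5 * customer_avg_amount:
        score += 0.2  # sharp deviation from the customer's usual spend
    return min(score, 1.0)

def flag_for_review(tx: Transaction, customer_avg_amount: float) -> bool:
    # High-scoring transactions are routed to a human analyst rather
    # than blocked automatically, preserving human oversight.
    return risk_score(tx, customer_avg_amount) >= 0.5
```

The design point is that automation handles the volume while humans handle the judgment calls.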

Regulators can maintain greater oversight of the sector and markets

More sophisticated data gathering and analytics tools utilising AI are key here. Regulators taking a data-first approach to supervision can analyse more complex datasets from firms to better identify historic or emerging regulatory trends, and to forecast future risks that may affect the whole sector or specific areas of it.

Equally, AI solutions can help from an oversight perspective: identifying and grouping firms operating in specific areas more thoroughly, tackling issues such as misleading advertisements and content, or determining whether product promotion and delivery aligns with customer target market requirements.

What are the key risks of using AI in the financial services sector?

Notwithstanding a future where AI improves outcomes for all, it is critically important that the risks are considered carefully. Unchecked use of AI, without appropriate regulatory and human oversight, risks significant harm to the sector. Key risks include:

Risk of bias and poor customer treatment

An AI system is only as good as the data it is trained on: a biased dataset results in a biased AI system. Customers can be highly diverse – some professional and sophisticated, others with material vulnerabilities – all with differing financial circumstances, demands and needs, risk appetites and perceptions of financial services. Poor training and delivery of AI solutions risks increased financial inequality and further societal imbalance, in the UK and globally.
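
As a simple illustration of how such bias might be surfaced, the sketch below compares an AI system's approval rates across customer groups. The data, group labels and tolerance are hypothetical; a real fairness review would use much richer data and several complementary metrics.

```python
# Illustrative sketch only: the records, group labels and tolerance
# are hypothetical and chosen purely to demonstrate the check.

decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

def approval_rate(records: list[dict], group: str) -> float:
    subset = [r for r in records if r["group"] == group]
    return sum(r["approved"] for r in subset) / len(subset)

rate_a = approval_rate(decisions, "A")
rate_b = approval_rate(decisions, "B")

# A crude demographic-parity check: a large gap in approval rates is a
# prompt for human investigation, not conclusive proof of bias.
TOLERANCE = 0.2  # hypothetical
if abs(rate_a - rate_b) > TOLERANCE:
    print(f"Review needed: approval rates differ ({rate_a:.0%} vs {rate_b:.0%})")
```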

Operational risks to firms

AI must have human oversight: regulators expect the senior leadership of firms to maintain control and oversight of operations. A high dependency on AI technology that isn't fully understood or monitored by a firm risks operational failures, for example issues of accountability or failures to spot unfair and erroneous decisions. This could impact regulatory compliance and the delivery of good customer outcomes.

Systemic risk to the market 

If firms become overly reliant on AI technology to operate, systemic risk can follow: for example, if an AI software provider became insolvent, or a tool utilised by a significant proportion of firms failed or delivered inaccurate or biased outcomes. This represents a key operational resilience risk to firms and, given the deep interconnectedness of the financial system, to the wider market.

Fraud

Unfortunately, fraudsters and unscrupulous parties use AI to replicate legitimate firm and customer behaviours, often with the end goal of financial gain and major disruption to businesses and customers. This may result in losses through the payment of unauthorised transactions and claims, with lasting societal damage if a customer's financial history is manipulated.

Data privacy 

Data protection is central to AI discussions, particularly as data linked to finances can be highly sensitive and could materially impact welfare if it fell into the wrong hands or was used incorrectly. There can be challenges with using AI technology in a manner that complies with applicable privacy laws, for example with:

  1. Establishing a lawful basis for processing.
  2. Providing data subjects with transparent processing information so they understand what is happening with their personal data.
  3. Enabling data subjects to exercise their legal rights in respect of the personal data ingested by the AI system. As just one example, it may not be possible to delete or extract personal data, or to rectify inaccuracies, once personal data has been entered into an AI system.

Security

Security of AI systems is critical to protecting the data they process and to ensuring compliance with privacy laws. Firms in the financial services sector are also subject to significant regulatory requirements in respect of operational security and related risk monitoring. For example, in the UK many firms are subject to Financial Conduct Authority (FCA) rules and guidance on operational resilience and on maintaining the physical and digital security and integrity of systems, controls and processes.

AI regulation

AI regulation remains in its infancy in many jurisdictions. Below, we comment on recent EU developments and on the current position in the UK in respect of the financial services sector.

EU progress

The European Union (EU) published the EU Artificial Intelligence Act (EU AI Act) in the Official Journal of the EU on 12 July 2024. It entered into force on 1 August 2024, with some provisions becoming applicable within six to twelve months and most applying two years after entry into force.

The EU AI Act classifies AI systems by differing risk levels as follows:

  • Minimal or low-risk systems won’t be regulated.
  • Limited-risk systems are subject to certain transparency obligations.
  • High-risk AI systems are permitted subject to specific requirements, such as rigorous testing, transparency, data quality and accountability obligations, and human oversight requirements.
  • Prohibited AI systems are banned, with some limited exceptions.
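
By way of illustration only, the tiers above can be thought of as a simple lookup a firm might sketch when triaging its AI systems. The tier names and obligation summaries below merely restate the classification; any real mapping would need to follow the text of the EU AI Act itself.

```python
from enum import Enum

# Illustrative restatement of the EU AI Act's risk tiers; a real triage
# exercise must follow the legislation itself, not this summary.

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal or low"

OBLIGATIONS = {
    RiskTier.PROHIBITED: "banned, with some limited exceptions",
    RiskTier.HIGH: ("permitted subject to rigorous testing, transparency, "
                    "data quality and accountability obligations, and "
                    "human oversight requirements"),
    RiskTier.LIMITED: "subject to certain transparency obligations",
    RiskTier.MINIMAL: "not regulated under the Act",
}

def summarise(tier: RiskTier) -> str:
    """Return a one-line summary of a tier's treatment under the Act."""
    return f"{tier.value} risk: {OBLIGATIONS[tier]}"
```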

How the EU AI Act will apply to financial services firms is a complex question, with the European Commission having recently launched a consultation requesting targeted feedback from firms in the sector. Broadly, firms will likely need to meet EU AI Act requirements when utilising AI systems within its scope, whilst continuing to meet the standards required by financial services law and regulation.

UK progress

Presently, the UK takes a principles-based approach to regulating AI. The government empowers regulators, including the FCA and the Prudential Regulation Authority (PRA), to regulate AI in line with key principles:

  • Safety
  • Security and robustness
  • Transparency and explainability
  • Fairness
  • Accountability and governance
  • Contestability and redress

Related to this, the UK has published an Artificial Intelligence (Regulation) Bill (AI Bill), which proposed to introduce a new UK AI authority; however, this is ‘on hold’ following the UK’s general election earlier in 2024. The new Labour government has been silent on the future of the AI Bill, and it was not mentioned in the King’s Speech in July 2024, which sets out the UK’s legislative road map.

As a result, UK financial services firms do not face ‘AI-specific’ FCA or PRA requirements at present. Rather, firms utilising AI technologies must navigate existing financial services regulatory regimes as they apply to their activities, products and services holistically – for example, considering how the use of AI may impact consumer protection, operational standards, and data security and protection. We consider this makes sense, aligning with the FCA’s and PRA’s technology-neutral approach to the interpretation of UK financial services law and regulation. This was confirmed in recent AI updates from the regulators.

AI risk mitigation in financial services

Regulation is necessary to address AI risks, whether through targeted AI regulation, through existing regulatory frameworks specific to the financial services sector, or through a combination of the two.

Firms and regulators must think carefully about how AI technology is implemented in operational models and decision-making, particularly in respect of the type of data that may be input into the AI technology and the reliance placed on the outcomes it provides.

A considered approach is also necessary for risk assessments and due diligence on AI tools and providers. Robust testing will help determine whether a solution is fit for purpose, delivering accurate and unbiased decisions.

Above all, human intervention and oversight remain key – the FCA and PRA have high expectations for governance in financial services firms. From the top down, a firm should be able to articulate how and why it uses AI, with relevant staff receiving proper training to understand its decision-making and how to use it responsibly.

What can we expect to see in the future? 

AI will no doubt support continued growth and innovation within the sector – improving customer experience, delivering new products and services, saving costs and unlocking operational efficiencies for firms, and enabling more granular and targeted oversight for regulators.

These opportunities need to be balanced against the risks associated with AI and related emerging technologies, to ensure firm and market stability and the delivery of good outcomes to customers.

For further information, please contact Oliver Woodhouse, who leads our financial services regulatory practice, and Hannah Pettit, who specialises in data and technology in our commercial team.
