UK: The regulation of AI in the UK financial services sector
In this Insight article, Lara White, Hannah Meakin, Marcus Evans, Hannah McAslan-Schaaf, and Rosie Nance, from Norton Rose Fulbright LLP, explore how the UK's financial services regulators, including the Financial Conduct Authority (FCA) and the Prudential Regulation Authority (PRA), are navigating the evolving landscape of artificial intelligence (AI) through a technology-neutral, principles-based approach. They emphasize the importance of balancing AI's transformative potential with robust regulatory oversight to ensure its safe and responsible use in the sector.
Introduction
AI is revolutionizing the financial services sector. It can drive efficiency, enhance consumer experience, and improve risk management techniques. Whilst the benefits of utilizing AI can be substantial, it is essential to address the associated challenges and risks through responsible use and robust regulatory frameworks.
Regulation in the UK financial services sector is generally 'technology-neutral.' This means that instead of creating a completely new set of rules and expectations specifically for AI in financial services, the FCA, the PRA, and the Bank of England (the BoE) are adopting an outcomes- and principles-based approach to AI regulation. The previous UK Government had proposed to continue to use the existing, technology-neutral legislative framework to regulate AI. Regulators were to apply new, cross-sectoral AI principles using their existing powers. This approach contrasted with the EU's approach, where the EU AI Act sets out a new regulatory framework specifically for AI. The new Government may depart from the previous approach to some extent.
Approach to the regulation of AI under the current Government
On July 17, 2024, the King's Speech set out the Government's agenda for the current Parliamentary session. The speech noted that the Government will 'seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models.' However, the accompanying briefing did not provide any detail on an AI Bill. This may indicate that the Government intends to consult further on AI legislation. It remains to be seen whether the Government will continue to rely on the existing legislative framework for regulating AI beyond those developing 'the most powerful artificial intelligence models.'
The briefing included an overview of a Product Safety and Metrology Bill, which is intended to respond to new product risks and opportunities and to enable the UK to keep pace with technological advances, such as AI.
Steps taken towards the regulation of AI under the previous Government
In March 2023, the UK Government published the AI Regulation White Paper (the White Paper), setting out its proposed regulatory framework for AI. The White Paper set out a principles-based framework for existing regulators to interpret and apply within their sectors. Following the White Paper's publication, the Government hosted a consultation. It received more than 400 written responses and engaged with more than 300 roundtable and workshop participants. After considering this feedback, the Government published its response to the consultation on February 6, 2024. In the response paper, the Government confirmed its plans to introduce five cross-sectoral principles for existing regulators to interpret and apply within their remits in order to drive safe, responsible AI innovation:
- Safety, security, and robustness;
- Appropriate transparency and explainability;
- Fairness;
- Accountability and governance; and
- Contestability and redress.
These cross-sectoral principles were to be combined with a context-specific framework, international leadership and collaboration, and voluntary measures on developers, with the aim of allowing regulation to keep pace with rapid and uncertain advances in AI. More specifically, other aspects of the regulatory framework that were outlined in the White Paper and discussed further in the response paper include:
- a statutory duty requiring regulators to have due regard to the cross-sectoral principles;
- new central functions that focus on coherence across the regulatory landscape, cross-sectoral risk, and monitoring and evaluation;
- additional education and awareness support for consumers, businesses, and regulators;
- the allocation of legal responsibility for AI throughout the value chain;
- approaches to the regulation of foundation models; and
- an AI regulatory sandbox.
The response paper noted that the Government is continuing to work closely with regulators to develop the regulatory framework for AI, ensure coherent implementation, and build regulator capability. Alongside the response paper, the Government published initial guidance to regulators on how to apply the cross-sectoral AI principles within their existing remits, and it intends to update this guidance over time to reflect developments in the regime and technological advances in AI.
The Government also flagged that the challenges posed by AI technologies will ultimately require legislative action in every country once the understanding of risk has matured. The response paper therefore sets out the Government's early thinking and the questions it will need to consider for the next stage of its regulatory approach.
The Government asked several regulators to publish an update outlining their strategic approach to AI by April 30, 2024.
The Artificial Intelligence (Regulation) Private Members' Bill (the Bill) was introduced in the House of Lords in November 2023 and focused on establishing a framework for the regulation of AI in the UK, including placing AI regulatory principles on a statutory footing and establishing a central AI authority to oversee the regulatory approach to AI. The Bill had completed its passage through the Lords by June 2024 but had not begun its progress through the House of Commons and has now fallen. While private members' bills typically do not become law, the Bill had been a potential avenue for meaningful Parliamentary debate on the regulation of AI. Lord Holmes, who introduced the Bill, has indicated that he plans to reintroduce it in this Parliamentary session.
FCA AI update
In response to the Government's White Paper and the Government's call for updates from regulators on their strategic approach, the FCA published its AI Update on April 22, 2024, clarifying its approach to the regulation and supervision of AI. In particular, the FCA noted the following:
- the FCA's rules, regulations, and core principles do not usually mandate or prohibit specific technologies. Rather, the FCA's regulatory approach is to identify and mitigate risks to its objectives, including from regulated firms' reliance on different technologies, and the harms these could potentially create for consumers and financial markets;
- the importance of proportionality in its regulatory approach; and
- UK regulation is based on an outcomes-focused approach, which gives firms greater flexibility and enables regulation to be applied more readily to technological changes and market developments.
The FCA also explained how some of the key elements of its existing regulatory framework map to each of the five principles set out in the Government's papers (described above):
- Safety, security, and robustness - A variety of high-level principle-based rules, along with more detailed regulations and guidance, are pertinent to a firm's safe, secure, and robust use of AI systems in the UK financial services sector. For instance, according to the FCA's Principles for Businesses, firms are required to conduct their operations with due skill, care, and diligence, and to manage their affairs responsibly and effectively, with adequate risk management systems in place. Additionally, various FCA threshold conditions are relevant, particularly the requirement for a firm's business model to be appropriate. Beyond these broad, overarching requirements, there are specific rules and guidelines related to systems and controls outlined in the Senior Management Arrangements, Systems and Controls (SYSC) sourcebook, which apply to various categories of firms. This includes provisions for risk controls under SYSC 7 and general organizational requirements under SYSC 4. The FCA's focus on operational resilience, outsourcing, and critical third parties is also especially relevant to the principles of safety, security, and robustness. The requirements under SYSC 15A (Operational Resilience) are designed to ensure that relevant firms can respond to, recover from, learn from, and prevent future operational disruptions.
- Appropriate transparency and explainability - Whilst the FCA's regulatory framework does not specifically mandate transparency or explainability for AI systems, it does include several high-level requirements and principles related to consumer protection. These are pertinent to the information that firms provide to consumers and are relevant to the safe and responsible use of AI in financial services. Notably, under the Consumer Duty, there is an overarching obligation to act in good faith, which entails honesty and fair, transparent dealings with retail consumers, as outlined in PRIN 2A.2.2R. Additionally, related rules under the Consumer Duty emphasize the need to cater to the information needs of retail customers, ensuring they are equipped to make well-informed, timely, and effective decisions.
- Fairness - The FCA's regulatory approach to consumer protection is particularly relevant for ensuring fairness in firms' safe use of AI systems. This approach is based on a combination of the FCA's Principles for Businesses, other high-level rules, detailed regulations, and guidance, including the Consumer Duty. The FCA's Principles for Businesses are crucial in this context: for instance, Principle 8 on managing conflicts of interest and Principle 9 on the suitability of advice and discretionary decisions are significant considerations for firms. Additionally, several of the FCA's Threshold Conditions are pertinent to the fair use of AI by firms, particularly those related to a firm's suitability and business model, which encompass the consideration of consumer interests. There are also specific rules and guidance in various chapters of the FCA Handbook related to consumer protection that influence firms' safe and responsible use of AI. Furthermore, firms using AI systems that process personal data must comply with data protection legislation, including the UK General Data Protection Regulation (UK GDPR) and the Data Protection Act 2018.
- Accountability and governance - The FCA's regulatory framework includes various rules and guidance related to firms' governance and accountability arrangements, which are pertinent for firms utilizing AI safely and responsibly within their business models. This framework encompasses high-level rules and principles, such as specific FCA Threshold Conditions and the Principles for Businesses, notably Principle 3 on Management and Control. Additionally, the SYSC sourcebook offers detailed provisions on systems and controls, as well as governance processes and accountability arrangements for firms.
- Contestability and redress - Firms that incorporate AI into their business operations are still responsible for ensuring compliance with FCA rules, including those related to consumer protection. If the use of AI leads to a breach of these rules, such as when AI systems generate decisions or outcomes that harm consumers, firms can be held accountable through various mechanisms, and consumers have avenues for redress. Firms must have robust complaints handling procedures in place to ensure that grievances, including those related to AI decisions in financial services, are addressed fairly and promptly. Chapter 1 of the 'Dispute Resolution: Complaints' Sourcebook provides rules and guidance on handling complaints. If consumers are dissatisfied with a firm's internal resolution, they can refer the issue to the Financial Ombudsman Service for an independent review at no cost, which can award compensation when appropriate. Additionally, depending on the nature of the breach, redress may be available through voluntary or mandatory firm-led redress schemes and the Financial Services Compensation Scheme.
In addition, in the AI Update, the FCA provided an indication of the work it has planned for the next 12 months. Of particular note are the following priorities which the FCA has identified for itself:
- Continuing to further its understanding of AI deployment in UK financial markets - the FCA aims to ensure that any future regulatory adjustments are proportionate to the risks created by the use of AI, whilst also fostering a framework that supports beneficial innovation. The goal is to continue to build a comprehensive understanding of how AI is being used in UK financial markets, ensuring that any future regulatory measures are not only effective but also balanced and supportive of innovation. This will also allow the FCA to promptly address emerging issues at specific firms from a supervisory perspective. Currently, the FCA is engaged in diagnostic work related to the deployment of AI across UK financial markets. It is also conducting a third edition of the machine learning survey, in collaboration with the BoE, and working with the Payment Systems Regulator to examine AI across various system areas.
- Building on existing foundations - the existing regulatory framework covers firms' use of technology in ways that align with and support the Government's AI principles. The FCA continues to monitor the situation and may actively consider future regulatory adaptations if needed. Recent developments, such as large language models, have again put resilience at the heart of what the FCA does. Regimes relating to operational resilience, outsourcing, and critical third parties will become more central to the FCA's analysis, as they have increasing relevance to firms' safe and responsible use of AI, and will be informed by lessons from a better understanding of AI deployment in UK financial markets.
- Collaboration - collaboration is crucial for establishing a consensus on best practices and potential future regulatory initiatives, as well as for building empirical understanding and gathering intelligence. The FCA will continue to work closely with other regulators through its membership in the Digital Regulation Cooperation Forum (DRCF). The DRCF has four members: the Competition and Markets Authority, Ofcom, the Information Commissioner's Office, and the FCA. It aims to deliver a coherent approach to digital regulation for the benefit of people and businesses online. This also includes active engagement with regulated firms, civil society, academia, and international counterparts.
- International coordination - the FCA has prioritized domestic and international engagement and collaboration on AI, reflecting the need for global alignment and standardization on how best to regulate AI. The FCA is closely involved with the International Organization of Securities Commissions, including its AI working group, and supports the work of the Financial Stability Board. The FCA is also a core participant in other multilateral forums on AI, including the Organization for Economic Co-operation and Development, the Global Financial Innovation Network, and the G7.
- Testing for beneficial AI - the FCA will work with DRCF member regulators to deliver the pilot AI and Digital Hub, while exploring changes to its innovation services and assessing opportunities for an AI Sandbox.
- Using AI in the FCA's own regulatory activities - investing in advanced models to detect fraud, scams, and market abuse, and exploring potential use cases involving natural language processing, synthetic data, and large language models.
- Looking towards the future - conducting research on emerging technologies, such as deepfakes and quantum computing, and responding to the data asymmetry between Big Tech and traditional financial services firms.
The BoE/PRA response letter
In response to the Government's White Paper, the BoE and the PRA published a letter on April 22, 2024, clarifying the BoE/PRA's approach to the regulation and supervision of AI (the Letter).
The Letter clarified that the primary focus of the BoE and the PRA is to maintain financial stability and to ensure that regulated firms operate safely and soundly. They are assessing how to facilitate AI/machine learning (ML) adoption in areas such as data management, model risk management, governance, and operational resilience. Further consultations and a survey are planned to deepen their understanding of AI/ML integration.
Ongoing collaboration with the FCA and other regulatory bodies aims to establish a unified approach to AI/ML. The BoE endorses the Government's principles for AI regulation, which highlight innovation, proportionate regulation, and cross-regulatory cooperation. The Letter concludes by reaffirming the BoE's dedication to a regulatory framework that balances the benefits and risks of AI, aligned with its statutory objectives.
The Letter also outlined how key elements of the PRA's existing regulatory framework map to each of the five principles set out in the Government's papers:
- Safety, security, and robustness - the PRA's approach is primarily contained in PRA SS 2/21 (Outsourcing and Third-Party Risk Management). AI needs to be considered through this lens, to ensure that relevant risks are effectively identified and mitigated.
- Appropriate transparency and explainability - the PRA's approach is primarily contained in PRA SS 1/23 (Model Risk Management Principles for Banks). The principles set out in this paper need to be applied to AI.
- Fairness - whilst the PRA acknowledges that this principle is more relevant in a consumer context (i.e., to the FCA), where fairness is relevant to the PRA's remit, it expects firms to define fairness for themselves and to act appropriately. Where relevant, AI also needs to be considered through a fairness lens.
- Accountability and governance - the UK's Senior Managers and Certification Regime (SMCR) establishes a framework through which senior individuals are required to take responsibility for the firm's business and its compliance with regulatory requirements. Individual accountability is key to this. In the future, the parameters of individual responsibility and accountability will need to include AI.
- Contestability and redress - the PRA's expectation is that this may be more relevant to the FCA as it primarily applies in a consumer-facing context.
Similarities and divergences between FCA and BoE/PRA approaches
Both the FCA and the BoE/PRA express their support for the Government's principles-based, technology-neutral, and sector-specific approach to regulation. They emphasize that these cross-sectoral principles align with their regulatory strategies and assert that their existing regulatory frameworks are well-suited to foster AI innovation in a manner that benefits the financial services industry whilst effectively managing associated risks. It is interesting to note that these regulators are also using AI for certain purposes within their own organizations.
There are, however, a few differences in approach:
- The FCA acknowledges that its regulatory stance must evolve to keep pace with the rapid advancement, scale, and complexity of AI. There is a need for increased emphasis on the rigorous testing, validation, and comprehension of AI models, along with robust accountability mechanisms.
- Conversely, the PRA has highlighted the potential implications of widespread AI adoption in financial services, particularly concerning financial stability. This adoption could pose challenges to its statutory objectives. Consequently, the PRA intends to conduct a comprehensive analysis throughout 2024, the findings of which will be reviewed by the BoE's Financial Policy Committee.
Ultimately, the FCA and PRA approaches to regulation are intended to be flexible, collaborative, and forward-looking. By focusing on principles and maintaining a technology-neutral stance, their aim is to support the safe and responsible adoption of AI, ensuring that it brings benefits to the industry and consumers while mitigating potential risks.
Looking towards the future
There is no doubt that AI is revolutionizing the financial services sector at an unprecedented pace. The financial services sector needs to be forward-looking and collaborate with a diverse set of stakeholders to identify potential novel risks and opportunities. Firms and regulators will need to continue to understand new technologies as they develop, and regulation will need to keep pace with technological change.
Lara White, Partner
Hannah Meakin, Partner
Marcus Evans, Partner
Hannah McAslan-Schaaf, Counsel
Rosie Nance, Senior Knowledge Lawyer
Norton Rose Fulbright LLP, London