UK

Summary

Law: Data Protection Act 2018 (the Data Protection Act) and the UK General Data Protection Regulation (UK GDPR), the UK's retained version of Regulation (EU) 2016/679

Regulator: The Information Commissioner's Office (ICO)

Summary: As the UK is no longer an EU Member State, from January 1, 2021, the UK's data protection regime has been governed by the Data Protection Act 2018 (the Data Protection Act) and the UK GDPR, which is broadly similar to the EU GDPR. On that basis, the European Commission adopted two adequacy decisions for the UK: one under the GDPR and one under the Law Enforcement Directive (Directive (EU) 2016/680).

In addition, on September 21, 2023, the Department of Science, Innovation and Technology (DSIT) published the Data Protection (Adequacy) (United States of America) Regulations 2023, also known as the UK-US Data Bridge, for the UK Extension to the EU-US Data Privacy Framework, designating the US as a jurisdiction that ensures an adequate level of personal data protection for data transfers in specified circumstances.

Insights

The Data (Use and Access) Bill was introduced to the House of Lords of the UK Parliament on October 23, 2024. The Bill aims to amend the UK's data protection regime by including provisions on recognized legitimate interests for lawful processing, automated decision-making, international data transfers, and cookies.

OneTrust DataGuidance Research provides an overview of the Bill, with expert insights by Philip James, Partner at Eversheds Sutherland's Global Privacy & Cybersecurity Group and AI Task Force, and Victoria Hordern, Partner at Taylor Wessing.

The UK Government plans to introduce new cybersecurity legislation across the UK in the form of the Cyber Security and Resilience Bill (the Bill). The Bill's objective is to improve the law around cybersecurity generally to make the UK's critical and cyber infrastructure more resilient in the face of frequent and damaging cyberattacks. Neil Williamson and Colin Lambertus, from EM Law, delve into the Bill and how it may be influenced by other developments in cybersecurity regulation, including the consultation on the NIS Regulations and the NIS 2 Directive.

With the full entry into force of the EU's Digital Services Act (DSA) in February 2024, coupled with the EU's efforts to regulate artificial intelligence (AI) (culminating in the recently passed EU AI Act), businesses have been grappling with the compliance challenges posed by both of these regimes, against the backdrop of the General Data Protection Regulation (GDPR) and the rapidity with which tech companies have been innovating and launching AI tools in the UK and EU.

The genesis of the DSA predates the tsunami-like wave of AI innovation and adoption but has very important consequences for those larger businesses already caught in the crosshairs of the DSA who are now also bringing powerful new AI tools to market. In this article, Geraldine Scali and Anna Blest, from Bryan Cave Leighton Paisner LLP, explore how users and deployers of AI will need to consider the impact of the DSA alongside existing compliance obligations in the GDPR, whilst readying themselves for the staged implementation of the EU's AI Act.

India's commitment towards the promotion and development of artificial intelligence (AI) was recently highlighted in the Union Budget of 2024-25 that was announced by the Indian government in July 2024. The Budget allocated $65 million exclusively to the IndiaAI Mission, an ambitious $1.1 billion program that was announced earlier this year to focus on AI research and infrastructure in India. It has also widely been reported that the Ministry of Electronics and Information Technology (MeitY) is in the process of formulating a national AI policy, which is set to address a wide spectrum of issues including the infringement of intellectual property rights and the development of responsible AI. As per reports, MeitY is also analyzing the AI frameworks of other jurisdictions to include learnings from these frameworks in its national AI policy. Part one of this series focused on understanding the regulatory approaches adopted by some key jurisdictions, such as the EU and the USA. In part two, Raghav Muthanna, Avimukt Dar, and Himangini Mishra, from INDUSLAW, explore measures that India can adopt, and lessons it can take from such markets, in its journey in the governance of AI systems.

In this Insight article, Lara White, Hannah Meakin, Marcus Evans, Hannah McAslan-Schaaf, and Rosie Nance, from Norton Rose Fulbright LLP, explore how the UK's financial services regulators, including the Financial Conduct Authority (FCA) and the Prudential Regulation Authority (PRA), are navigating the evolving landscape of artificial intelligence (AI) through a technology-neutral, principles-based approach. They emphasize the importance of balancing AI's transformative potential with robust regulatory oversight to ensure its safe and responsible use in the sector.

The Information Commissioner's Office (ICO) published guidance on the use of artificial intelligence (AI), recognizing risks and that responsible deployment of AI has the potential to make a positive contribution to society.

Part one of this Insight series discusses chapter one of the ICO's guidance on the lawful basis for web scraping, part two focuses on chapter two, which applies the purpose limitation principle to different phases of the generative AI lifecycle, and part three explores the ICO's third chapter concerning the accuracy of data and outputs. In part four, James Castro-Edwards, from Arnold & Porter, delves into chapter four of the guidance on individual rights, particularly in the stages of training and fine-tuning generative AI.

As we navigate the complexities and opportunities of 2024, the UK remains at the forefront of artificial intelligence (AI) innovation. The country's commitment to harnessing AI is underpinned by a robust strategic framework, reflecting both the ambitious goals and pragmatic steps necessary to maintain its leadership in this rapidly evolving field. In this Insight article, Tahir Latif, Global Practice Lead for Data Privacy & Responsible AI at Cognizant, delineates the critical AI priorities for the UK in 2024, focusing on regulatory frameworks, ethical considerations, innovation, and practical implementation.

The Information Commissioner's Office (ICO) has published a series of chapters on its interpretation of the UK General Data Protection Regulation (UK GDPR) and Part 2 of the Data Protection Act 2018 regarding the use, risks, and responsible deployment of artificial intelligence (AI).

Part one of this Insight series focused on chapter one of the ICO's guidance on the lawful basis for web scraping, while part two looked at how the purpose limitation principle of the UK GDPR applies to different phases of the generative AI lifecycle discussed in chapter two of the guidance. In part three, James Castro-Edwards, from Arnold & Porter, explores the ICO's third chapter concerning the accuracy of data and outputs.

Earlier this year, the UK's Information Commissioner's Office (ICO) released its much-anticipated updated guidance on the calculation of fines for data protection non-compliance under the UK General Data Protection Regulation (UK GDPR) and Data Protection Act 2018 (the Act). With organizations seeking clarity on their financial exposure in the event of a data breach or non-compliance, this ICO guidance supersedes aspects of the ICO's historic Regulatory Action Policy and aims to demystify the fine calculation process, providing a roadmap for organizations to understand the potential financial impact of non-compliance. The guidance sheds light on the ICO's methodology for setting fines, describing the circumstances in which the ICO may exercise administrative discretion to issue a penalty notice. This move comes after similar guidelines were finalized and released in 2023 by the European Data Protection Board (EDPB), which sought to standardize the fining power available under GDPR across the EU. James Lloyd and Sami Martin Qureshi, from Latham & Watkins LLP, discuss the new guidelines and key takeaways for organizations subject to the UK GDPR.

The Information Commissioner's Office (ICO) published a series of chapters highlighting its emerging views on its interpretation of the UK General Data Protection Regulation (UK GDPR) and Part 2 of the Data Protection Act 2018, in relation to questions around the use, risks, and responsible deployment of artificial intelligence (AI).

Part one of this Insight series focused on chapter one of the ICO's guidance on the lawful basis for web scraping. In part two, James Castro-Edwards, from Arnold & Porter, looks at chapter two of the guidance, which discusses how the purpose limitation principle of the UK GDPR applies to different phases of the generative AI lifecycle.

The Information Commissioner's Office (ICO), the UK data protection authority responsible for enforcing the UK General Data Protection Regulation (UK GDPR), announced earlier this year its series of consultations on how aspects of data protection law should apply to the development and use of generative artificial intelligence (AI) models. The term 'generative AI' refers to AI models that create new content, which includes text, audio, images, or videos. The ICO recognizes that responsible deployment of AI has the potential to make a positive contribution to society, and intends to address any risks so that organizations and the public may reap the benefits generative AI offers.

The ICO guidance responds to a number of requests for clarification made by innovators in the AI field, including the appropriate lawful basis for training generative AI models, how the purpose limitation principle plays out in the context of generative AI development and deployment, and the expectations around complying with the accuracy principle and data subjects' rights.

The ICO has published a series of chapters, which outline its emerging views on its interpretation of the UK GDPR and Part 2 of the Data Protection Act 2018, in relation to these questions. The ICO is in the process of seeking the views of stakeholders with an interest in generative AI to help inform its positions. In part one of this Insight series, James Castro-Edwards, from Arnold & Porter, delves into chapter one of the ICO's guidance, focusing on legitimate interests as a lawful basis, the risks involved in web scraping, and measures that developers can take to mitigate such risks.

Generative artificial intelligence (AI) models, that is to say, AI models capable of generating text, images, code, audio, video, and other content in response to inputs or prompts, such as OpenAI's ChatGPT and DALL·E, Meta's Llama, and Google's Imagen (accessed via Gemini), require significant volumes of high-quality data to train the model and enable it to assimilate information and refine its output through an iterative process. Generative AI models do not 'memorize' or recount their training data, per se, but instead learn to predict the appropriate output based on probabilities, having regard to patterns in the training data.

According to OpenAI, ChatGPT was developed using 'three primary sources of information:' publicly available information on the internet, information licensed from third parties, and information provided by users or human trainers. Meta's Llama 2 was similarly 'pretrained on publicly available online data sources' and trained on '2 trillion tokens,' which are the units of data into which training data is split, whereby each word, punctuation mark, or pixel, for example, would constitute a separate token. Both developers state that they either did not intentionally target, or sought to remove from their training data, sources with high volumes of personal data. Web scraping, the process of gathering or extracting data from websites through the use of an automated tool or bot, has legal implications for website operators, developers of AI models, their deployers, and data subjects, even where the data scraped, including personal data, is publicly available. Nicola Cain, of Handley Gill Limited, discusses these legal implications for all parties involved in web scraping data.