Support Centre

EU

Summary

Law: General Data Protection Regulation (Regulation (EU) 2016/679) (GDPR)

Regulator: The European Data Protection Supervisor (EDPS) is the European Union's (EU) data protection authority and monitors privacy within EU institutions and bodies. The European Data Protection Board (EDPB) is an independent European body composed of representatives of the national data protection authorities and the EDPS.

Summary: The GDPR was approved on May 24, 2016, and became directly applicable in the EU Member States on May 25, 2018. It has since inspired several other privacy laws around the world. The GDPR lays down rules relating to the processing of personal data aimed at protecting natural persons, as well as provisions on the free movement of personal data. The GDPR, although a European regulation, has a broad scope of application that imposes direct statutory obligations on data processors and can affect controllers established outside the EU.

Parallel to the GDPR, the Data Protection Law Enforcement Directive (Directive (EU) 2016/680) (LED) entered into force on May 5, 2016. As a directive rather than a regulation, EU Member States had to transpose the LED into their national law by May 6, 2018. The LED deals with the processing of personal data by data controllers for law enforcement purposes, which falls outside of the scope of the GDPR.

The EU has also established further pieces of legislation with substantive importance within the Digital Single Market. In particular, the ePrivacy Directive entered into force on July 31, 2002, with a deadline for transposition into national law by EU Member States of October 31, 2003. The ePrivacy Directive regulates the processing of personal data and the protection of privacy in the electronic communications sector, with specific reference to the regulation of unsolicited communications and cookies and similar technologies. In January 2017, the European Commission presented a proposal that seeks to repeal the ePrivacy Directive and replace it with the Proposal for a Regulation Concerning the Respect for Private Life and the Protection of Personal Data in Electronic Communications and repealing Directive 2002/58/EC (Regulation on Privacy and Electronic Communications) (the Draft ePrivacy Regulation). Negotiations on the terms of the final text are ongoing between the Council of the European Union and the European Parliament.

As part of its digital strategy, the EU has adopted a suite of new-era digital legislation, while other pieces of digital-related legislation remain under negotiation.

Furthermore, on January 16, 2023, the NIS 2 Directive entered into force, repealing the pre-existing NIS Directive starting from October 18, 2024. EU Member States have until October 17, 2024, to transpose the NIS 2 Directive into national law. Building on the NIS Directive, the NIS 2 Directive imposes new and enhanced cybersecurity-related obligations on companies and other private or public entities in certain sectors.

Lastly, the Whistleblowing Directive entered into force on December 16, 2019, and was required to be transposed by EU Member States by December 17, 2021. The Whistleblowing Directive provides for rules that enable whistleblowers to report breaches of EU law without fear of retaliation.

Insights

In this Insight article, Dr. Mira Suleimenova, from Jentis GmbH, discusses the role of synthetic data in online marketing to protect users' privacy and comply with the relevant EU legislation.

The Network and Information Security Directive (Directive (EU) 2022/2555) (NIS 2 Directive) is a significant new law enacted to bolster cybersecurity across the European Union. It is an update of the original NIS Directive (Directive (EU) 2016/1148), which was adopted to address the increasing threats to network and information security. The aim of the NIS 2 Directive is to further harmonize, benchmark, and enhance the cybersecurity measures that apply to network and information systems across the EU, creating a more robust cybersecurity regulatory framework.

These new cybersecurity rules have been introduced as a directive, meaning that each Member State must enact laws reflecting these new rules and empowering regulators in their countries to supervise and enforce these laws. Individual countries can add to the NIS 2 Directive rules, provided any additional rules or requirements introduced are consistent with the Directive. The specific requirements and enforcement mechanisms can and do vary between Member States.

In this Insight article, Deirdre Kilroy, from Bird & Bird LLP, discusses the key elements of the NIS 2 Directive and how in-scope entities can ensure compliance.

Artificial intelligence (AI) systems and AI-powered products are categorized as products, an approach reflected in the EU Artificial Intelligence Act (the AI Act). AI is being deployed in multiple sectors of society and has become an essential component of products that could not operate without it; at the same time, AI applications have not been free of accidents and misuse.

In this Insight article, Spiros Tassis and Paolo Quattrone, from POTAMITISVEKRIS Law Partnership, explain how the current and forthcoming EU product liability framework will interact with AI systems and the AI Act, and explore how the AI Act deals with product liability, in order to determine what is currently driving the EU's choices. To further understand the relationship between the AI Act and product liability, they also examine the changes in the Union's legislative landscape, the threat AI poses to consumers, and the focus points the European Commission should consider.

Laws governing technology have historically focused on the regulation of information privacy and digital communications. However, governments and regulators around the globe have increasingly turned their attention to artificial intelligence (AI) systems. As the use of AI becomes more widespread and AI changes how business is conducted across industries, there are signs that existing declarations of principles and ethical frameworks for AI, and the first AI regulations (including those established in the EU), may soon be followed by other AI-specific legal frameworks in other jurisdictions.

On June 16, 2022, the Canadian Government tabled Bill C-27, the Digital Charter Implementation Act, 2022. Bill C-27 proposes to enact, among other things, the Artificial Intelligence and Data Act (AIDA). In this Insight, Christopher Ferguson, Summer Lewis, and Dongwoo Kim, from Fasken Martineau DuMoulin LLP, provide a comparison between AIDA and the EU's Artificial Intelligence Act (the EU AI Act), looking specifically at both laws' approach to key definitions, the use of data, requirements for AI systems, and penalties, among other things.

ISO 42005 is an emerging standard poised to play a pivotal role in the global artificial intelligence (AI) governance ecosystem. As AI continues to rapidly evolve, businesses face increasing pressure to align with regulations, standards, and best practices, ensuring ethical, transparent, and risk-conscious AI deployments. In this Insight article, Sean Musch, CEO of AI & Partners, and Charles Kerrigan, Partner at CMS Cameron McKenna Nabarro Olswang LLP, aim to help businesses understand ISO 42005, its significance, and how they can integrate it into their operations to stay ahead of regulatory demands and competitive pressures.

In this Insight article, Mark Lubbock and Ellen Keenan-O'Malley, from EIP, explore the EU's regulatory challenges with artificial intelligence (AI), highlighting new laws and concerns over data, competition, and innovation, and examine the risks that overregulation poses to Europe's tech competitiveness.

In recent months, understanding the new data protection obligations arising from legislation being approved across the EU has been a priority for many. A specific focus has been the Artificial Intelligence Act (the AI Act) and its interplay with the General Data Protection Regulation (GDPR). Both EU regulations uphold common principles such as transparency and fairness. Human intervention is considered a relevant means to mitigate potential harm. Additionally, the concept of risk-based assessments is emphasized as the best approach to balancing what is acceptable and what is not before a particular application is implemented.

In this Insight article, Sofia Calado, from Cloudflare, discusses the obligations and guidelines regarding assessments of artificial intelligence (AI) systems and models under the AI Act and the GDPR.

The EU Artificial Intelligence Act (the AI Act) is set to become a landmark regulation governing artificial intelligence (AI). It introduces requirements and responsibilities for providers (and those treated as providers) of general-purpose AI (GPAI) models. With respect to GPAI models, a provider is a natural or legal person or body that develops or has a GPAI model developed and places that model on the market under its own name or trademark, whether for payment or free of charge. This includes organizations that outsource the development of a GPAI model and then place it on the market.

The concept of GPAI models was not in the original text of the AI Act when it was first proposed in 2021. However, articles, and eventually an entire chapter, on GPAI models were added after the proliferation of models like OpenAI's GPT-3, generating considerable debate during negotiations on the Act.

In this Insight article, Katie Hewson and Eva Lu, from Stephenson Harwood LLP, examine the definition of GPAI models under the Act, as well as the sub-category of GPAI models with systemic risk and the obligations of providers of these models.

The use of dashcams is on the rise in Europe, raising data protection concerns surrounding the collection and processing of personal data.

Part one of this series looked at data protection regulations and best practices in the EU, UK, France, and Belgium. In part two, OneTrust DataGuidance consulted with experts in Ireland, Germany, the Netherlands, and Italy.  

The discipline of marketing and advertising has received considerable attention from data protection practitioners over the years. The sector has been critically examined for its use of profiling, followed by big data and, most recently, real-time bidding. Despite all this, many may wonder why companies, and, in particular, their marketing departments, remain keen to engage in the highly controversial activity of personalizing advertising messages.

In this Insight article, Dr. Sachiko Scheuing, from Acxiom, examines recent regulatory developments affecting personalized advertising and how organizations can ensure they are compliant.

The EU Artificial Intelligence Act (the AI Act) and Regulation (EU) 2023/1230 on machinery (the Machinery Regulation), which repeals Directive 2006/42/EC (the Machinery Directive), are closely intertwined. In this Insight article, Rosa Barcelo and Matúš Huba, from McDermott Will & Emery, analyze the purposes, interplay, and scope of the AI Act, the Machinery Directive, and its successor, the Machinery Regulation.

The EU Artificial Intelligence Act (the AI Act) is set to become a landmark regulation governing artificial intelligence (AI), introducing requirements and responsibilities for various actors in the AI value chain, including providers and deployers.

In part one of this Insight series, Katie Hewson and Eva Lu, from Stephenson Harwood LLP, discussed the definitions of providers and deployers under the AI Act and how these roles are allocated. In part two, they focus on the differences in obligations and risk exposure between the two, as well as steps organizations can take to mitigate those risks.