
UK: The Digital Services Act and its interplay with AI

The EU's Digital Services Act (DSA) entered fully into force in February 2024, just as the EU's efforts to regulate artificial intelligence (AI) culminated in the recently passed EU AI Act. Businesses have been grappling with the compliance challenges posed by both regimes, against the backdrop of the General Data Protection Regulation (GDPR) and the speed with which tech companies have been innovating and launching AI tools in the UK and EU.

The genesis of the DSA predates the tsunami-like wave of AI innovation and adoption, but the Act has very important consequences for those larger businesses already within its crosshairs that are now also bringing powerful new AI tools to market. In this article, Geraldine Scali and Anna Blest, from Bryan Cave Leighton Paisner LLP, explore how users and deployers of AI will need to consider the impact of the DSA alongside existing compliance obligations under the GDPR, whilst readying themselves for the staged implementation of the EU's AI Act.


The DSA: Goal and scope

What is the DSA and what is it intended to do?

The EU's stated goal for the DSA is to ensure a safe, predictable, and trusted online environment and to counter the dissemination of illegal and harmful content online, as well as the societal risks that disinformation may generate. Given the way such activities are magnified by the ability to operate across the large online platforms that now form a backdrop to all our lives, the EU has chosen to regulate how these activities reach an online audience. The DSA therefore regulates online intermediaries and platforms (including hosting services, marketplaces, social networks, content-sharing platforms, app stores, and online travel and accommodation platforms), as they are essential components of the digital ecosystem used by consumers and businesses alike to buy and sell goods and services online, and to obtain and share information.

The DSA operates a tiered set of rules, which prescribe obligations based on the function, size, and impact of the online business in the digital ecosystem. Adopting a risk-based approach, those businesses deemed to pose particularly significant risks in relation to illegal content are subject to the broadest range of compliance obligations, with the European Commission granted the power to designate certain entities as Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs). Those intermediary service providers who merely transmit, host, or cache information are not liable for information so transmitted, hosted, or stored, provided that they act to remove or disable access to illegal content once aware of it.

Online platforms and online search engines with at least 45 million monthly active users in the EU (representing 10% of the EU's population) are categorized as VLOPs and VLOSEs, respectively.1
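
The arithmetic behind that threshold can be illustrated with a minimal sketch (in Python; the function, constant names, and population estimate below are our own illustrative assumptions, not terms drawn from the DSA):

    # A minimal, illustrative sketch: the 45 million figure is the DSA's
    # designation threshold; the EU population estimate and all names here
    # are assumptions made for illustration only.

    VLOP_VLOSE_THRESHOLD = 45_000_000      # average monthly active EU users
    EU_POPULATION_ESTIMATE = 450_000_000   # approximate EU population

    def meets_designation_threshold(monthly_active_eu_users: int) -> bool:
        """True if a platform or search engine reaches the VLOP/VLOSE threshold."""
        return monthly_active_eu_users >= VLOP_VLOSE_THRESHOLD

    # Sanity check: the threshold is roughly 10% of the EU population.
    print(VLOP_VLOSE_THRESHOLD / EU_POPULATION_ESTIMATE)   # 0.1
    print(meets_designation_threshold(50_000_000))         # True: designation likely
    print(meets_designation_threshold(30_000_000))         # False: below threshold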

The most far-reaching rules in the DSA apply to VLOSEs and VLOPs. These include:

  • requirements to identify and remove illegal content;
  • restrictions on the use of misleading user interfaces that prevent users from making free and informed decisions (for example, through the use of 'dark patterns' and 'nudging' practices that manipulate users' choices);
  • requirements to enhance the transparency of online advertising (including the provision of more information to users and the ability to opt out of recommendation systems based on profiling);
  • increased protection for children using these online services (including a ban on targeted advertising based on profiling); and
  • requirements to carry out annual risk assessments and report these to the Commission.

Where does the DSA apply, and which entities are within its scope?

All online intermediaries offering their services in the EU's single market, whether or not they are established in the EU, need to comply with the DSA. Micro and small companies have obligations proportionate to their ability and size, while remaining accountable. In addition, even if micro and small companies grow significantly, they benefit from a targeted exemption from a set of obligations during a transitional 12-month period.

The DSA therefore applies to non-EU companies, as it applies to all online intermediaries 'offering' their services in the European single market, whether they are established in the EU or outside of it. Whether an online intermediary is 'offering' its services in the EU is determined by the extent of its connection to the EU, assessed by reference to the number of EU recipients and whether it targets activities towards one or more EU Member States (with the latter concept interpreted very broadly). The effect is that as soon as a mobile app is available in an app store accessible from the EU, or a website is offered in the language of an EU country, EU recipients will be deemed to be targeted, with the consequence that the DSA will apply.
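
As a rough illustration of how those connection factors might be combined (a hedged sketch only: the field names, factors, and 'any factor suffices' logic below are our assumptions; the legal test is a holistic, fact-specific assessment):

    from dataclasses import dataclass

    @dataclass
    class ServiceFootprint:
        # Factors loosely drawn from the connection indicators discussed
        # above; field names are illustrative, not statutory terms.
        significant_number_of_eu_recipients: bool
        app_in_eu_accessible_app_store: bool
        site_in_eu_member_state_language: bool
        otherwise_targets_eu_member_state: bool

    def dsa_likely_applies(s: ServiceFootprint) -> bool:
        """Crude heuristic: any single connection factor may bring a service
        within scope; a real assessment requires legal judgment."""
        return any([
            s.significant_number_of_eu_recipients,
            s.app_in_eu_accessible_app_store,
            s.site_in_eu_member_state_language,
            s.otherwise_targets_eu_member_state,
        ])

    # Example: a non-EU provider whose app can be downloaded from an EU app store.
    print(dsa_likely_applies(ServiceFootprint(False, True, False, False)))  # True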

Online service providers who offer web-based messaging services and email services are subject to the lowest tier of obligations, requiring them to review their T&Cs to ensure certain minimum clarity/transparency requirements are met and to include information about their content moderation policies. Those in scope are also subject to an annual transparency and reporting regime (unless they are SMEs) in relation to their content moderation activity, and must designate a local representative in the EU (if they do not have an EU establishment).

Generative AI: Source of risks and content moderation solution

The requirement on VLOPs and VLOSEs to assess and mitigate a series of systemic risks specified in the DSA is also amplified by:

  • the mass use of generative AI on platforms operated by VLOPs and VLOSEs (with risks that could lead to more widespread access to illegal content); and
  • the growing use of AI in the service offerings of VLOPs and VLOSEs.

The DSA requires more active content moderation of the vast amount of user-generated content made available online, and more transparency about how online service providers interact with users via their own interfaces and when serving advertising based on user information. Against those requirements, AI tools present both risks and opportunities for users and online service providers alike.

Generative AI: A source of illegal and harmful content

Users are interacting with AI tools in all aspects of their lives, with the potential for the generation of far larger volumes of online user-generated content. This content may infringe third-party rights (e.g., copyright) depending on how the underlying model is trained, exhibit harmful bias, or be used to create harmful or illegal imagery and text on a mass scale (such as political disinformation or deepfakes). AI also makes it easier for bad actors to misuse an online service, for example through the submission of abusive notices or other methods for silencing speech or hampering competition. Whilst the means used to transmit content remain the same, the ability of bad actors to generate harmful content and disinformation more quickly and cheaply using AI will increase the burden of content moderation duties for VLOPs and VLOSEs by increasing the volume of material that may need to be moderated. Moreover, determining whether content has been AI-generated is an inexact science.

Generative AI: A solution for content moderation?

The DSA was framed to be technology neutral, so it does not refer to AI at all, although it is apparent that AI usage will be caught, e.g., where the DSA refers to the use of 'automated means' of content moderation. It is also likely that the distinction between standard large language models (LLMs) and search engines is being eroded (e.g., where a traditional search engine is overlaid with an AI tool), which could bring more new technology within the scope of the DSA. This development seems likely to invite the scrutiny of numerous EU regulatory bodies (the European Commission, as well as the EU AI Office and the national Member State bodies tasked with enforcing the new AI rules).

To give an idea of the sheer scope of the content moderation undertaking, the EU's transparency database for the DSA shows how much online content has been subject to moderation (leading to removal), with over 11 billion statements of reasons submitted to the database in respect of content removed or disabled by online platform providers over the last six months. Given that volume, it is inevitable that automated means will be used, and indeed over 50% of the statements of reasons were generated automatically. This potentially presents an opportunity to use AI tools to help a regulated provider meet its DSA obligations, provided that, in doing so, it complies with the EU's upcoming rules in the AI Act on the deployment of AI tools and with the DSA's transparency rules, which require users to be informed when automated tools are used for content moderation.

The content moderation duty in the DSA also comprises a core transparency element, with service providers required to give users information on any policies, procedures, measures, and tools used for the purpose of content moderation, including algorithmic decision-making and human review, as well as the rules of procedure of their internal complaint handling system. This includes, for providers of intermediary services, a requirement to give 'meaningful and comprehensible information about the content moderation engaged in at the providers' own initiative, including the use of automated tools, the measures taken to provide training and assistance to persons in charge of content moderation, the number and type of measures taken that affect the availability, visibility, and accessibility of information provided by the recipients of the service and the recipients' ability to provide information through the service, and other related restrictions of the service; the information reported shall be categorized by the type of illegal content or violation of the terms and conditions of the service provider, by the detection method and by the type of restriction applied.'
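
Purely by way of illustration of those three reporting axes (the class, field names, and category values below are our own assumptions, not taken from the DSA or any Commission reporting template), a provider's moderation log might be tallied as follows:

    from collections import Counter
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ModerationAction:
        # Field names and example values are illustrative assumptions only.
        content_type: str      # e.g., 'hate_speech', 'ip_infringement', 'tc_violation'
        detection_method: str  # e.g., 'automated', 'user_notice', 'trusted_flagger'
        restriction: str       # e.g., 'removal', 'visibility_reduction', 'suspension'

    def categorize(actions: list[ModerationAction]) -> Counter:
        """Tally actions along the three axes the DSA's reporting duty names:
        content type (or T&C violation), detection method, restriction applied."""
        return Counter((a.content_type, a.detection_method, a.restriction)
                       for a in actions)

    report = categorize([
        ModerationAction('hate_speech', 'automated', 'removal'),
        ModerationAction('hate_speech', 'automated', 'removal'),
        ModerationAction('ip_infringement', 'user_notice', 'visibility_reduction'),
    ])
    for (ctype, method, restriction), count in report.items():
        print(f'{ctype} / {method} / {restriction}: {count}')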

Where an intermediary uses automated methods for content moderation, users need to be given a description of the tool used, the purposes for which it will be used, together with indicators of its accuracy and its possible error rate.
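
As a minimal sketch of what 'indicators of accuracy and possible error rate' could look like in practice (the metrics below are standard classifier measures; the DSA does not prescribe a formula, so this framing is our assumption), such figures might be derived from a human-reviewed sample of the tool's decisions:

    def moderation_accuracy_indicators(tp: int, fp: int, tn: int, fn: int) -> dict:
        """Compute simple accuracy/error indicators for an automated moderation
        tool from a human-reviewed sample. tp/fp/tn/fn are counts of true/false
        positives/negatives; which metrics satisfy the DSA is a legal judgment."""
        total = tp + fp + tn + fn
        return {
            'accuracy': (tp + tn) / total,
            'error_rate': (fp + fn) / total,
            # Lawful content wrongly restricted:
            'false_positive_rate': fp / (fp + tn) if (fp + tn) else 0.0,
            # Illegal content missed:
            'false_negative_rate': fn / (fn + tp) if (fn + tp) else 0.0,
        }

    # Example: 900 correct removals, 40 wrongful removals,
    # 2,000 correct keeps, 60 missed items in a reviewed sample.
    print(moderation_accuracy_indicators(tp=900, fp=40, tn=2000, fn=60))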

There is also tension in that a number of VLOPs and VLOSEs are simultaneously developing and launching AI tools with high-impact capabilities as components of their regulated services. The DSA does not specifically address such uses, nor cases where original users' content is significantly modified by an integrated AI tool; these would more likely fall within the remit of the EU's AI Act.

Conclusion

The key challenge for online service providers is now to determine how to meet their transparency obligations under the DSA in tandem with applicable EU AI Act and GDPR compliance requirements.

To meet transparency requirements, some companies have already revised their T&Cs to inform users about the use of automated tools (such as AI) in the context of algorithmic decision-making and complaints handling. There has been public backlash in the EU at proposed changes to T&Cs which would permit user data to be included in AI training datasets, and national regulators are investigating social media platforms that use user data to train their LLMs and chatbots. This sets up an inevitable tension, as those developing AI tools are often also those offering platform services of the sort regulated by the DSA, with access to large user datasets. It also poses a conundrum where providers are simultaneously seeking to use AI to better protect users from illegal content while wanting to use their users' content to train LLMs. The combined impact of the DSA and the EU AI Act appears at this stage to be acting as a brake on the use of EU citizens' data to train AI tools in development by US tech companies, with several tech companies pausing the use of EU data and facing complaints from consumer bodies about the use of user data in AI tools.

Platforms are also considering the thorny question of responsibility for AI-generated content. Who will be responsible in law for harmful or illegal AI-generated content? Is the user of the AI tool responsible for its output? In intellectual property law, at present, in most jurisdictions, AI tools cannot themselves be authors (for the purposes of claiming protection for the AI-generated work). What does this mean for online platforms making AI tools available as a component of a service? Some tech companies address this through their T&Cs, ensuring that responsibility for content quality rests with users (who are placed under obligations not to distribute AI-generated content to harm others or to engage in illegal activity). Some are also using automated tools to detect whether general-purpose tools built on LLMs violate their user policies.

Online businesses facing this complex compliance burden will need to implement smart strategic processes, engaging with internal teams across the compliance function and carefully managing communications with users (in public-facing terms of use). Data teams will need to work closely with those rolling out new technologies to understand whether new impact assessments are required and any likely impacts on personal data use.

Geraldine Scali Partner
[email protected]
Anna Blest Senior Knowledge Lawyer
[email protected]
Bryan Cave Leighton Paisner LLP, London


1. The first VLOPs and VLOSEs were designated on April 25, 2023, with the Commission identifying 17 VLOPs and 2 VLOSEs that currently reach the relevant monthly user threshold, and monitoring those service providers whose user bases may be approaching it.