
UK: Ofcom publishes letter to providers of generative AI and chatbots

On November 8, 2024, the Office of Communications (Ofcom) published an open letter to online service providers operating in the UK about how the Online Safety Act will apply to generative artificial intelligence (AI) and chatbots.

What is the scope of the Online Safety Act?

In the letter, Ofcom clarified that the Online Safety Act applies to generative AI chatbot tools and platforms including:

  • sites or apps that include a generative AI chatbot enabling users to share text, images, or videos generated by the chatbot with other users. This includes services with a 'group chat' functionality that enables multiple users to interact with a chatbot at the same time;
  • sites or apps that allow users to upload or create their own generative AI chatbots which are also made available to other users. This includes services that provide tools for users to create chatbots that mimic the personas of real and fictional people, which can be submitted to a chatbot library for others to interact with. Any text, images, or videos created by these 'user chatbots' are 'user-generated content' and are regulated by the Online Safety Act;
  • user-to-user services on which users share AI-generated text, audio, images, or videos, which is regulated in the same way as human-generated content. For example, deepfake fraud material is regulated no differently from human-generated fraud material;
  • generative AI tools that enable the search of more than one website and/or database. This includes tools that modify, augment, or facilitate the delivery of search results on an existing search engine, or that provide 'live' internet results to users on a standalone platform; and
  • sites and apps that include generative AI tools that can generate pornographic material.

Ofcom advised organizations whose services are in the scope of the Online Safety Act to prepare to comply with the relevant duties. For providers of user-to-user services and search services, Ofcom explained that this includes:

  • undertaking risk assessments to understand the risk of users encountering harmful content;
  • implementing proportionate measures to mitigate and manage those risks; and
  • enabling users to easily report illegal posts and material that is harmful to children.

Ofcom clarified that the first duties will take effect from December 2024, when Ofcom publishes its final Illegal Harms Risk Assessment Guidance and Codes of Practice.

Measures for organizations

Ofcom noted that many of the measures in its draft Codes of Practice will help user-to-user and search services meet their duties and protect their users from risks posed by generative AI. These include:

  • having a named person accountable for compliance with the Online Safety Act;
  • having a content moderation function that allows for the swift takedown of illegal posts where identified and for children to be protected from material that is harmful to them;
  • having a content moderation function that is adequately resourced and well-trained;
  • using highly effective age assurance to prevent children from encountering the most harmful types of content where this is allowed on the platform; and
  • having easy-to-access and easy-to-use reporting and complaints processes.

You can read the letter here.