California: The AI Transparency Act – what you need to know
On September 19, 2024, the California AI Transparency Act (the Act) was signed into law by California Governor Gavin Newsom. The Act follows in the footsteps of other US states that have developed laws requiring transparency in the use of artificial intelligence (AI). The Act is unique, however, in that it imposes specific watermarking requirements. In this Insight article, OneTrust DataGuidance breaks down the key provisions of the Act and who it applies to, with comments provided by Jacob Canter, Counsel at Crowell & Moring LLP, and Lily Li, Founder of Metaverse Law Corporation.
Definitions
The Act provides definitions for key terms such as 'personal information,' 'personal provenance data,' and 'metadata.' Most notably, 'artificial intelligence' is defined as 'an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.'
Under the Act, 'generative artificial intelligence system' is defined as 'an artificial intelligence that can generate derived synthetic content, including text, images, video, and audio, that emulates the structure and characteristics of the system’s training data.'
Scope
The Act applies to covered providers, who must comply with the Act from January 1, 2026, when it becomes operative.
The Act defines 'covered provider' as 'a person that creates, codes, or otherwise produces a generative artificial intelligence system that has over 1,000,000 monthly visitors or users and is publicly accessible within the geographic boundaries of the state.'
Regarding the 1 million monthly visitors or users, Jacob notes "This is a bit ambiguous because it does not explain how to calculate 'over 1,000,000 visitors or users.' Is this based on an average number of visitors or users from the prior year? Does your obligation to comply change every month depending on how many users you had in the prior month? Until that ambiguity is clarified, the safer approach may be to prepare for compliance even if your company does not consistently have over 1 million visitors."
Lily adds that "according to Governor Newsom, California is 'home to 32 of the world’s 50 leading AI companies,' many of which will be required to comply with this Act due to the nature of their AI systems and number of monthly users."
Jacob adds that "Most of the generative AI laws in the U.S. have been subject-matter specific. Some states have either enacted or passed laws related to transparency and fairness in elections (for example, PA, MA, NC, WA, and CA). Many states have passed laws that seek to limit the dissemination of deepfakes (for example, TX, FL, IL, NY, and CA). And Colorado and New York City have passed laws that seek to limit discriminatory uses of generative AI (for example, CO and NY). In contrast, the AI Transparency Act is general. It covers all generative AI content that a covered company's product generates. On these terms, the Act is actually quite broad."
Obligations
Regarding the implications of the Act on businesses, Jacob explains that "California's AI Transparency Act will have a direct impact on businesses that develop generative AI systems and have over 1 million monthly visitors or users. These businesses must comply with the law's requirements: to create an 'AI detection tool,' to embed 'latent-disclosure' data into their AI-generated content, and to make 'manifest disclosures' available for the content as well."
The Act requires covered providers to provide users, at no cost, with an AI detection tool that:
- allows users to assess whether an image, video, or audio content has been created or changed by the covered provider's generative AI tool;
- outputs any system provenance data detected in the content;
- does not output any personal provenance data detected in the content;
- subject to certain exceptions, is publicly accessible;
- allows users to upload content or provide a URL for online content; and
- supports an application programming interface (API) that allows users to use the tool without visiting the covered provider's website (a sketch of such an interface follows below).
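To make the API requirement concrete, below is a minimal sketch of a client calling a hypothetical covered provider's detection endpoint. The URL, field names, and response schema are assumptions for illustration only; the Act specifies what the tool must do, not the shape of its interface.

```python
# Hypothetical client for a covered provider's AI detection API.
# The endpoint URL, request fields, and response schema below are
# illustrative assumptions; the Act does not prescribe an interface.
import requests

DETECTION_ENDPOINT = "https://api.provider.example/v1/detect"  # assumed URL

def check_content(file_path: str) -> dict:
    """Submit a media file and return the detection result."""
    with open(file_path, "rb") as f:
        response = requests.post(DETECTION_ENDPOINT, files={"content": f})
    response.raise_for_status()
    result = response.json()
    # Per the Act, the tool may output system provenance data detected
    # in the content, but must not output personal provenance data.
    return {
        "ai_generated": result.get("ai_generated"),
        "system_provenance": result.get("system_provenance"),
    }
```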
Under the Act, covered providers should also collect user feedback on the AI detection tool and incorporate this feedback to improve the tool's efficacy.
In addition, covered providers should not:
- collect or retain personal information from users of the AI detection tool, except where exceptions apply;
- retain content provided to the AI detection tool for longer than necessary to comply with the Act; or
- retain personal provenance data from content submitted to the AI detection tool.
Lily adds that "While other AI laws in the US are focused on risk assessment, notice, and disclosure obligations, this is the first major AI law that imposes product requirements on AI developers. Now, AI developers need to code in a digital watermark on generative AI content and provide the tools to detect this watermark. This is different from written disclosures on a browser or app, which can easily get lost or obscured when generative AI content is copied or embedded downstream."
Covered providers should offer users the option to include, in image, video, or audio content created or altered by the covered provider's generative AI system, a manifest disclosure (illustrated in the sketch after this list) that:
- identifies the content as being generated by AI;
- is clear, conspicuous, and appropriate for the content, as well as understandable to a reasonable person; and
- is permanent or difficult to remove.
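As a simplified illustration of what a manifest disclosure might look like in practice, the sketch below stamps a visible 'AI-generated' label onto an image using the Pillow library. It is a conceptual example only, not a compliance recipe; whether a given label is sufficiently clear, conspicuous, and difficult to remove will depend on context.

```python
# A simplified sketch of a manifest disclosure: stamping a visible
# "AI-generated" label onto an image before delivery. Illustrative
# only; the Act requires the disclosure to be clear, conspicuous,
# understandable, and permanent or difficult to remove.
from PIL import Image, ImageDraw

def add_manifest_disclosure(in_path: str, out_path: str) -> None:
    image = Image.open(in_path).convert("RGB")
    draw = ImageDraw.Draw(image)
    label = "AI-generated"
    # Place the label in the lower-left corner on a contrasting
    # background so it stays conspicuous against varied imagery.
    position = (10, image.height - 30)
    draw.rectangle(draw.textbbox(position, label), fill="black")
    draw.text(position, label, fill="white")
    image.save(out_path)
```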
Covered providers should also include, in image, video, or audio content generated by their generative AI system, a latent disclosure that:
- communicates the name of the covered provider, the name and version number of the generative AI system used, the time and date the content was created or altered, and a unique identifier; to the extent technically feasible and reasonable, the disclosure should be conveyed directly or through a link to a permanent internet website (see the metadata sketch after this list);
- is detectable by the covered provider's AI detection tool;
- is consistent with industry standards; and
- is permanent or extraordinarily difficult to remove.
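To make the latent disclosure requirement concrete, below is a minimal sketch of the information such a disclosure must convey, encoded here as JSON. The field names and encoding are assumptions for illustration; the Act mandates the information, not a format, and real implementations would embed it in the media itself in line with industry provenance standards such as C2PA content credentials.

```python
# A minimal sketch of the information a latent disclosure must convey
# under the Act. The field names and JSON encoding are illustrative
# assumptions; real systems would embed this via watermarking or
# provenance metadata so it is extraordinarily difficult to remove.
import json
import uuid
from datetime import datetime, timezone

latent_disclosure = {
    "provider_name": "Example AI Co.",    # name of the covered provider
    "system_name": "ExampleGen",          # generative AI system used
    "system_version": "2.1",              # version number of the system
    "created_at": datetime.now(timezone.utc).isoformat(),  # time and date of creation or alteration
    "content_id": str(uuid.uuid4()),      # unique identifier for the content
}

payload = json.dumps(latent_disclosure)
# The payload would then be embedded in the image, video, or audio
# itself so that it remains detectable by the provider's AI detection
# tool even after the content is copied or re-shared.
```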
Lily explains that "Additionally, the Act includes a requirement that these covered providers enter contracts with their licensees that contain specific provisions. (22757.3(c).) This means that businesses that incorporate AI or are considering implementing AI systems from covered providers may want to ensure the appropriate contracts are in place."
If covered providers license their generative AI systems to third parties, they must ensure that licensees maintain these disclosure capabilities. If a covered provider knows that a third-party licensee is no longer capable of including such disclosures, it must revoke the license within 96 hours of discovering this fact. Following revocation, the third party must cease using the licensed generative AI system.
Enforcement
The Act will be enforced by the Attorney General, a city attorney, or a county counsel, and provides that violators are liable for civil penalties.
Lily notes that "Under this Act, fines can add up quickly: A covered provider found in violation of this Act will be liable for $5,000 per violation – and each day the provider is in violation of the Act counts as a new violation. (22757.4(a-b).) For those who contract with covered providers, a violation may result in an injunction along with reasonable attorney’s fees and costs (22757.4(c).)."
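For illustration: a covered provider that remained out of compliance for 30 days would accrue $150,000 in penalties ($5,000 × 30 daily violations) for that period alone, before any injunctive relief or attorney's fees sought by contracting parties.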
Next steps
Jacob states that "Indirectly, the Act may create opportunities. Technical know-how is required to develop the AI detection tools, and both the latent and manifest disclosures. As often happens, companies can use this change in policy as an opportunity to build a product that facilitates compliance."
Lily adds that "This Act goes into effect on January 1, 2026, but covered providers should act now given the significant technology requirements of the Act. Covered providers need to :
- make an AI detection tool;
- include both an optional and a latent disclosure in all AI generated content; and
- enter contracts with licensees to ensure such latent disclosures."
Victoria Prescott, Team Lead – Editorial
With comments provided by:
Jacob Canter, Counsel, Crowell & Moring LLP, San Francisco
Lily Li, Founder, Metaverse Law Corporation, Newport Beach