Colorado: Governor signs AI bill

On May 20, 2024, the Colorado legislature announced that Senate Bill 24-205, a bill for an act concerning consumer protections in interactions with artificial intelligence systems (the Act), had been signed by the Colorado Governor on May 17, 2024. The Act will take effect on February 1, 2026, and requires a developer or deployer of a 'high-risk artificial intelligence system' to use reasonable care to avoid algorithmic discrimination within the high-risk system.

Scope of the bill and requirements

Beginning February 1, 2026, developers and deployers of 'high-risk artificial intelligence systems' must use reasonable care to avoid algorithmic discrimination within those systems. Developers of artificial intelligence (AI) models must maintain specific documentation for general-purpose models, including a policy for complying with federal and state copyright laws and a detailed summary of the content used to train the general-purpose model. Developers must also create, implement, maintain, and make available documentation to deployers who intend to integrate the general-purpose model into their own AI systems. The documentation must disclose a general statement regarding the reasonably foreseeable uses, and known harmful or inappropriate uses, of the high-risk AI system, in addition to:

  • high-level summaries of the type of data used to train the high-risk AI system;
  • known or reasonably foreseeable limitations of the high-risk AI system;
  • the purpose of the high-risk AI system;
  • intended benefits and uses of the high-risk AI system; and
  • all other information necessary to allow the deployer to comply with its obligations under the Act.

Developers of high-risk AI systems must also provide documentation describing how the system was evaluated for performance and for mitigation of algorithmic discrimination before it was offered, sold, leased, given, or otherwise made available to the deployer. Additionally, the Act requires that deployers be made aware of:

  • data governance measures used to cover the training datasets;
  • intended outputs;
  • measures to mitigate known or reasonably foreseeable risks of algorithmic discrimination;
  • how the system should be used, not used, and monitored when making consequential decisions; and
  • any information reasonably necessary to understand the outputs and monitor performance.

The Act also provides definitions for key terms, including 'algorithmic discrimination,' 'AI system,' 'consequential decision,' and 'high-risk AI system.'

Enforcement

The Colorado Attorney General (AG) has exclusive authority to enforce the Act and may adopt additional rules as necessary, including:

  • documentation requirements for developers;
  • content requirements for notices and disclosures;
  • content requirements for risk management policies;
  • content requirements for impact assessments;
  • requirements to prove that algorithmic discrimination was avoided; and
  • requirements for an affirmative defense.

You can read the signed Act here and view the legislative history here.