International: Council of Europe releases AI assessment methodology

On December 2, 2024, the Council of Europe Committee on Artificial Intelligence (CAI) released a methodology entitled 'Methodology for the risk and impact assessment of artificial intelligence systems from the point of view of human rights, democracy, and the rule of law (HUDERIA methodology).'

Scope of the Methodology

The Methodology outlines that HUDERIA provides a risk and impact assessment of artificial intelligence (AI) systems from the point of view of human rights, democracy, and the rule of law. It can be used by both public and private actors to help identify and address risks throughout the lifecycle of AI systems.

Moreover, the Methodology explains that the HUDERIA Model will provide supporting materials and resources, such as flexible tools, that can aid in implementing the Methodology.

Four main elements of the Methodology

The Methodology comprises four elements:

  • the Context-Based Risk Analysis (COBRA), which provides a structured approach to collecting and mapping the information needed to identify and understand the risks of the AI system, including an initial determination as to whether the AI system is an appropriate solution;
  • the Stakeholder Engagement Process (SEP), which proposes an approach to enabling and operationalizing engagement, as appropriate, with relevant stakeholders, in order to gain information about potentially affected persons and to contextualize and corroborate potential harms and mitigation measures;
  • the Risk and Impact Assessment (RIA), which sets out possible steps for assessing the risks and impacts to human rights, democracy, and the rule of law; and
  • the Mitigation Plan (MP), which provides possible steps for defining mitigation and remedial measures, including access to remedies and iterative reviews.

Iterative review

The Methodology also outlines that an iterative review of the risk and impact assessment helps ensure its effectiveness throughout the whole AI system lifecycle, noting that:

  • choices and events that took place at any point during the lifecycle of the system and its deployment may require a review of prior decisions and assessments;
  • changes in social, regulatory, policy, or legal environments may have effects on how well the AI system works and on how it impacts the rights of affected persons or groups; and
  • several principles should be considered regarding the implementation of an iterative review, such as the establishment of a monitoring plan.

You can read the Methodology here.