International: Regulating AI in Canada and the EU - AIDA versus the AI Act
Artificial intelligence (AI) is transforming the way we work, learn, and communicate. The rapid development and adoption of new AI-based technologies have prompted regulators around the world to create policies and regulations governing the use of AI, in an effort to ensure that it is used in a responsible and ethical manner. Canada and the EU are among the many jurisdictions that have recently recognized the need for AI-specific regulation.
In April 2021, the European Commission published its proposed Artificial Intelligence Act (AI Act) as a framework for a coordinated European approach to addressing the challenges and concerns raised by the increasing use of AI. The following year, in June 2022, the Canadian government introduced Bill C-27, the Digital Charter Implementation Act, 2022 (Bill C-27), which aims to update existing federal private-sector privacy laws. In addition to privacy law reform, Bill C-27 includes the Artificial Intelligence and Data Act (AIDA), Canada's first attempt to regulate AI through standalone legislation.
Both AIDA and the AI Act seek to encourage the responsible development and use of AI systems through a single regulatory framework. In this Insight article, Heather Whiteside, from Fasken, examines the similarities and differences between these legislative proposals, as currently drafted, in Canada and the EU.
Purpose of regulating AI
AIDA has two stated purposes. The first is to regulate interprovincial and international trade and commerce in AI systems by establishing common requirements for the design, development, and use of those systems. The second purpose is to prohibit certain conduct in relation to AI that may result in serious harm to individuals or harm to their interests.
Similarly, the objectives of the AI Act include ensuring that AI systems are safe and respect existing laws on fundamental rights and EU values, as well as establishing legal certainty to facilitate investment and innovation in AI.
Compared to AIDA, the AI Act aims to address and prevent a broader range of societal harms through the regulation of AI systems. While AIDA is focused specifically on preventing harm to individuals caused by AI systems, such as physical or psychological harm, property damage, and individual economic loss, the AI Act considers broader categories of harms that may be caused by AI systems. Notably, damage to the environment and disruption of the management and operation of critical infrastructure are examples of serious incidents caused by AI systems that must be reported to the relevant authorities under the AI Act.
Definition of AI
Both acts define 'artificial intelligence systems' in a flexible manner to account for the wide variety of applications of AI, both now and in the future.
The AI Act defines an AI system as software developed with one or more specified techniques, namely machine learning approaches, logic- and knowledge-based approaches, or statistical approaches. Such a system can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments it interacts with.
The definition in AIDA also refers to specific techniques used in the system, including a genetic algorithm, a neural network, or machine learning. However, only systems that operate autonomously or partly autonomously meet the definition of an AI system under AIDA. Autonomy is not expressly addressed in the AI Act's definition.
Canada's Minister of Innovation, Science, and Industry has recently published planned amendments to AIDA. One planned amendment would replace the definition of 'artificial intelligence system' with that used by the Organisation for Economic Co-operation and Development (OECD), thereby making AIDA more consistent with existing international standards. It is quite possible that the definition used in the AI Act will also be modified during trilogue negotiations between representatives of the European Parliament, the Council of the European Union, and the European Commission, which could align the two definitions more closely than currently proposed.
Categorizing high-risk AI systems
Both Canada and the EU are primarily focused on regulating high-impact (in the case of AIDA) or high-risk (in the case of the AI Act) AI systems, as opposed to general-purpose AI systems or systems that pose minimal risk of harming individuals.
The types of systems that are considered high-impact under AIDA will be established by future regulations. However, the planned amendments to AIDA have introduced an initial list of classes of high-impact systems. This parallels the approach taken in the EU.
Both the AI Act and AIDA (if the planned amendments are made) impose more rigorous requirements on systems that are used for the following high-risk purposes:
- employment-related determinations, such as the recruitment and selection of individuals or promotion and termination decisions;
- processing biometric data for the purposes of identifying individuals;
- the exercise and performance of certain law enforcement powers (this is broadly stated in the planned amendments to AIDA, whereas the AI Act sets out specific use cases, such as assessing an individual's risk of offending or reoffending and evaluating the reliability of evidence); and
- decision-making by courts and administrative bodies (this is broadly stated in the planned amendments to AIDA, whereas the AI Act specifies systems that are intended to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts).
In other cases, the high-risk categories under AIDA and the AI Act do not overlap. For example, only the AI Act expressly includes systems used in migration, asylum, and border control management in the high-risk category. Conversely, only AIDA expressly includes systems used for content moderation on online communication platforms, such as search engines and social media, in the high-impact category.
Requirements for high-risk AI systems
Although AIDA and the AI Act take similar approaches to categorizing high-risk systems, the obligations they impose on persons who design, develop, or otherwise make those systems available are not identical.
Under AIDA, persons responsible for high-impact AI systems must establish measures to identify, assess, and mitigate the risks of harm or biased output that could result from the use of the system, and must monitor compliance with those measures. They must also publish a plain-language description of the AI system that explains key elements of the system, such as how it is intended to be used, the types of content it is intended to generate, and the measures established to mitigate the risks of harm and biased output. These are relatively high-level obligations, in part because AIDA contemplates that further details will be set out in regulations made thereunder.
The AI Act imposes similar obligations with respect to risk management and transparency for users, but it also includes more rigorous data governance requirements. The AI Act sets out criteria for the use of training, validation, and testing datasets. These datasets must adhere to appropriate data governance and management practices, such as practices for data collection and processes for examining possible biases and identifying possible data gaps. Importantly, the datasets must meet a high standard of being relevant, representative, free of errors, and complete.
These obligations apply to all high-risk AI systems, but the AI Act does not stop there. Certain practices related to AI are entirely prohibited, such as the use of AI systems for social scoring or systems that are manipulative or exploitative. In other cases, the AI Act imposes detailed requirements on certain AI use cases, such as the use of biometric identification systems in public spaces for the purposes of law enforcement.
Oversight and enforcement
Both acts provide for strong oversight mechanisms. In the EU, the European Artificial Intelligence Board will assist the European Commission in providing guidance on, and overseeing, the AI Act. EU Member States will each designate or establish a national supervisory authority, mirroring the approach under the existing General Data Protection Regulation (GDPR). In Canada, the Artificial Intelligence and Data Commissioner will be responsible for assisting in the administration and enforcement of AIDA.
Penalties for engaging in prohibited practices or failing to meet the data governance requirements of the AI Act can reach €30 million or, if the offender is a company, 6% of its total worldwide annual turnover, whichever is higher. Other violations of the AI Act are subject to penalties of up to €20 million or 4% of total worldwide annual turnover.
Penalties under AIDA are only slightly less severe. Contraventions of AIDA's transparency requirements can result in fines of up to the greater of CAD 10 million (approx. USD 7 million) and 3% of global revenues, or up to CAD 50,000 (approx. USD 36,500) in the case of an individual. Persons who commit more serious criminal offences, such as using unlawfully obtained personal information to create an AI system, may be liable to a fine of up to the greater of CAD 25 million (approx. USD 18 million) and 5% of global revenues or, in the case of an individual, a fine at the discretion of the court or imprisonment. AIDA also establishes an administrative monetary penalty regime for lesser violations of the act, the specifics of which will be set out in future regulations.
Looking forward
AIDA and the AI Act are currently making their way through the legislative processes in Canada and the EU, respectively. The AI Act is expected to be finalized first: the European Commission, Council, and Parliament are expected to continue trilogue negotiations for the remainder of this year, with the aim of adopting the final version of the AI Act early next year.
AIDA has passed second reading in the House of Commons and is currently under consideration by the Standing Committee on Industry and Technology. The act is likely to be subject to further amendments as it makes its way through Parliament and, if passed, is not expected to come into force until at least 2025, once supporting regulations have been developed.
Both acts, if adopted, will provide for a transition period to allow organizations and governance bodies to prepare for a monumental shift in the regulation of AI.
Heather Whiteside, Associate
[email protected]
Fasken, Canada