Australia: The proposed AI guardrails - strengthening Australia's approach to AI development and regulation

On September 5, 2024, the Department of Industry, Science and Resources (DISR) published a paper on Safe and responsible AI in Australia: Proposals paper for introducing mandatory guardrails for AI in high-risk settings (the Proposed Guardrails). The DISR also announced a public consultation on the paper, the feedback from which will be used to guide Australia's approach to artificial intelligence (AI) regulation.

OneTrust DataGuidance provides an overview of the proposed guardrails and how they compare to other emerging AI regulations, with expert comments provided by Alec Christie, from Clyde & Co LLP.

Australia's AI initiatives

The Proposed Guardrails are part of Australia's wider coordinated approach to AI and align with other initiatives, including the voluntary guardrails under the Voluntary AI Safety Standard (the Voluntary Guardrails).

The paper sets out the Australian Government's overall approach to ensuring the safe and responsible adoption of AI, defined by five pillars:

  • Delivering regulatory clarity and certainty - The Government will strengthen and clarify regulations applicable to AI, including reviews of areas such as healthcare and copyright law.
  • Supporting and promoting best-practice governance - The Voluntary AI Safety Standard will ensure responsible innovation and development across organizations.
  • Supporting AI capability - The Government is committed to supporting and furthering AI capabilities, such as through programs and investments like the AI Adopt Program and the National Reconstruction Fund.
  • Government as an exemplar - In 2023, the Australian Government established the AI in Government Taskforce, which concluded in June 2024. The Digital Transformation Agency continues this work to ensure the Government is an example of responsible AI use, and released its policy on this in August 2024.
  • International engagement - Engaging in international conversations to share best practices and effective governance structures, including through the international AI Safety research network.

Comparing other approaches around the globe

Alec describes Australia's approach as "a more targeted 'light touch' regulatory response," stating that "the Australian Government has chosen what it believes is a more suitable approach for Australia. That is, setting key principles or guardrails that businesses are to follow to work with existing non‑AI regulations and, where necessary, pass specific AI regulations or amendments to existing laws to meet any specific AI challenges. The guardrails approach will provide businesses with clarity, comfort, and guidance as to what and where they should be looking at from a general or guiding principles point of view (not unlike the Privacy Principles in the Privacy Act) and enable the Government, either economy-wide or per sector or activity, to provide additional guidance (i.e., mandatory requirements) where necessary."

When comparing heavy and light regulatory approaches, Alec describes how "neither of these approaches is wrong and the approach very much depends on the specific circumstances in each country/jurisdiction. However, what must be coordinated on a global level is that (like privacy for most of the world) the key principles (or guardrails) underpinning AI development, deployment, and use must be consistent (whichever regulatory approach is taken to implement them). Otherwise, there will be yet further impediments and hurdles for those trying to carry on business globally by having to apply a myriad of different AI 'rules' for each country/jurisdiction they do business in."

Risk-based approach

The Proposed Guardrails describe how AI's characteristics, such as autonomy, adaptability and learning, opacity and lack of explainability, and a high degree of realism, set it apart from other technologies and require special attention. The advanced nature of AI, coupled with its rapid development, creates new risks and can amplify existing risks, such as bias.

The Proposed Guardrails take a risk-based approach, aligning with cybersecurity practices, considering the levels and characteristics of risks, and balancing preventative and remedial measures to target such risks. This precautionary approach ensures there are measures in place, like risk management assessments and frameworks, to avoid 'waiting for post-market liability measures and lengthy, often costly, litigation to shift industry practice.' This approach places clear obligations on AI developers and deployers throughout the whole AI lifecycle, aiming to build trust and confidence in AI technology across society.

Alec comments that "the voluntary guardrails will assist businesses developing, training, deploying, and using AI to navigate the risks and assist them to manage those risks in their business. However, longer term, we expect that the voluntary guardrails or key aspects of them will become mandatory in relation to either key areas/activities, sectors, uses, or the like. The current proposal for discussion for mandatory guardrails for high-risk AI activities is an example of this. While still very much at the proposal/discussion stage, the mandatory guardrails clearly identify that there will be activities, areas, focuses, sectors, or particular activities within sectors which will warrant (and for which the Government will mandate) mandatory guardrails (i.e., requirements).

"Given the lack of AI-focused regulation to date, the guardrails (and this approach), both voluntary and mandatory, are a significant step forward in AI regulation in Australia and, for the Australian circumstance at least, appear to be a sensible approach to AI regulation. Once the Government has considered all submissions in respect of the mandatory guardrails, likely by mid-2025, we will see legislation mandating the guardrails (as amended based on feedback and the submissions) for the, hopefully then, fully-defined 'high-risk AI' activities.

"Once the mandatory guardrails for high-risk AI have been introduced, we expect that the Government will then focus on other activities, areas, sectors, or parts of sectors to which the mandatory guardrails (or at least some of them) will be extended.

"Another good feature of the current approach is that, apart from one 'tweaked' guardrail out of 10, the voluntary and proposed mandatory guardrails are very similar. While the proposed mandatory guardrails may change based on submissions and discussion of the Government's proposal, it is a welcome change that there is consistency in approach and content between the voluntary and mandatory guardrails, so that businesses that apply the voluntary guardrails will be in a good position to apply the mandatory guardrails (and not disadvantaged when later applying them) if (and probably once) those guardrails ultimately apply to them."

The Proposed Guardrails

The 10 Proposed Guardrails would apply to the use of AI in high-risk settings and general-purpose AI (GPAI) models, encouraging the safe and responsible use, development, and deployment of AI.

1. Establish, implement, and publish an accountability process including governance, internal capability, and a strategy for regulatory compliance

The first Proposed Guardrail requires organizations to have a clear accountability process in place which includes governance policies, staff roles and responsibilities, and staff training plans to ensure and demonstrate compliance. Accountability processes should be accessible to the public to ensure public trust in AI products and services. This aligns with requirements under the Canadian Artificial Intelligence and Data Act (AIDA) and the EU Artificial Intelligence Act (the EU AI Act).

2. Establish and implement a risk management process to identify and mitigate risks

Organizations must ensure they have effective risk management processes in place due to the nature of high-risk AI systems. These processes should include steps to identify risks and their potential impact, determine suitable mitigation measures, and implement mechanisms to monitor the effectiveness of such measures.

The risk management strategies and risk mitigation methods must also be appropriate for the particular AI system, and the paper highlights that organizations can use standards such as ISO/IEC 42001 as a guide. Those involved in or affected by the AI system, particularly employees, should be consulted during the risk management process to identify high-priority areas. Ongoing evaluations and documentation support this guardrail.
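In practice, a risk management process of this kind is often backed by a simple risk register that records each identified risk, its mitigation, and a residual score that is monitored over time. The following minimal Python sketch illustrates the identify-mitigate-monitor cycle; the fields, scoring scale, and threshold are illustrative assumptions, not requirements drawn from the paper or from ISO/IEC 42001.

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    """One entry in an AI risk register (fields and scales are illustrative)."""
    description: str
    likelihood: int                    # 1 (rare) .. 5 (almost certain)
    impact: int                        # 1 (negligible) .. 5 (severe)
    mitigation: str = ""
    residual_score: int | None = None  # re-scored after mitigation is applied

    @property
    def score(self) -> int:
        """Inherent risk score before mitigation."""
        return self.likelihood * self.impact

@dataclass
class RiskRegister:
    risks: list[Risk] = field(default_factory=list)

    def identify(self, risk: Risk) -> None:
        self.risks.append(risk)

    def needs_attention(self, threshold: int = 12) -> list[Risk]:
        """Monitoring step: risks whose current (residual, if re-scored)
        score still exceeds the organization's tolerance threshold."""
        return [r for r in self.risks
                if (r.residual_score if r.residual_score is not None else r.score) > threshold]

# Example: identify a bias risk, record its mitigation, then monitor.
register = RiskRegister()
register.identify(Risk("Training data under-represents rural users",
                       likelihood=4, impact=4,
                       mitigation="Re-weight sampling; add fairness tests",
                       residual_score=6))
print(register.needs_attention())  # [] once mitigation brings the score under threshold
```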

3. Protect AI systems and implement data governance measures to manage data quality and provenance

Data quality directly impacts AI models and their outputs: training models on datasets that contain biases can reinforce those biases and cause harm to natural persons. Particular care should be taken to ensure AI models are not trained using harmful and illegal data. Therefore, organizations must implement effective data governance, privacy, and cybersecurity practices to avoid any discrimination or harm to people. Organizations must also disclose the sources of training data.
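By way of illustration, one concrete way to support the provenance and disclosure elements of this guardrail is to keep a verifiable record of each training dataset's source and content hash, so that disclosed sources can later be checked against the data actually used. The sketch below is a minimal illustration; the field names and the sample file are our own assumptions.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def dataset_provenance(path: str, source: str, licence: str) -> dict:
    """Build a provenance record for one training dataset. The SHA-256
    digest lets an auditor verify that the disclosed dataset is
    byte-for-byte the one actually used in training."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return {
        "path": path,
        "source": source,    # where the data was obtained
        "licence": licence,  # basis for lawful use
        "sha256": digest,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

# Example: record provenance before a training run (stand-in dataset).
Path("train.csv").write_text("age,outcome\n34,1\n")
record = dataset_provenance("train.csv", source="internal CRM export",
                            licence="internal use only")
print(json.dumps(record, indent=2))
```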

The paper highlights that the Attorney-General's Department is working with the Copyright and AI Reference Group to ensure organizations comply with copyright and other relevant laws, including privacy laws, mirroring provisions under the EU AI Act. These laws include the Privacy Act 1988 and the Copyright Act 1968.

4. Test AI models and systems to evaluate model performance and monitor the system once deployed

AI models must be tested before being placed on the market to ensure they meet their particular performance metrics and that any risks associated with the model are managed appropriately. Such metrics will vary depending on the intended use of the AI system, and the paper specifies that 'developers of GPAI models must conduct adversarial testing for any emergent or potentially dangerous capabilities.'

After a model is placed on the market, it should be continuously monitored and evaluated to ensure it functions as planned and that no unexpected consequences or risks arise. This aligns with the approach of the National Institute of Standards and Technology (NIST), which is developing guidelines to assess and audit AI capabilities under the US Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.
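As a concrete illustration, pre-deployment testing and post-deployment monitoring can share the same metric definitions: a release gate blocks deployment until the metric meets its target, and a drift alarm flags degradation in production. The following minimal Python sketch assumes a binary classifier and an accuracy metric; the thresholds are placeholders, not figures from the paper.

```python
def accuracy(y_true: list[int], y_pred: list[int]) -> float:
    """Fraction of predictions that match the labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def release_gate(y_true: list[int], y_pred: list[int], minimum: float = 0.90) -> bool:
    """Pre-deployment test: allow release only if the metric meets its target."""
    return accuracy(y_true, y_pred) >= minimum

def drift_alarm(baseline: float, live_window: float, tolerance: float = 0.05) -> bool:
    """Post-deployment monitor: flag when performance on a rolling window
    of live traffic falls more than `tolerance` below the release baseline."""
    return live_window < baseline - tolerance

# Example: evaluate on a held-out test set, then watch live performance.
y_true, y_pred = [1, 0, 1, 1, 0], [1, 0, 1, 0, 0]
baseline = accuracy(y_true, y_pred)             # 0.8
print(release_gate(y_true, y_pred))             # False: 0.8 < 0.90, do not deploy
print(drift_alarm(baseline, live_window=0.70))  # True: live accuracy has drifted
```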

5. Enable human control or intervention in an AI system to achieve meaningful human oversight

Although AI is intended to automate activities without human intervention, human oversight is necessary to avoid any risk of harm from AI models and systems. Organizations must provide those responsible for AI models with the necessary training and resources to access and understand the AI model, its outputs, core capabilities, and limitations, as well as how to mitigate any associated risks.
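A common pattern for achieving meaningful oversight is a human-in-the-loop gate: the system applies its output automatically only when its confidence is high, and routes everything else to a trained reviewer. The sketch below is a minimal illustration; the confidence threshold and review queue are assumptions, not mechanisms prescribed by the paper.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    """A single AI-assisted decision awaiting action (illustrative)."""
    subject: str
    outcome: str       # what the model recommends
    confidence: float  # model's own confidence estimate, 0..1

# Queue of decisions escalated to trained human reviewers.
review_queue: list[Decision] = []

def decide(decision: Decision, auto_threshold: float = 0.95) -> str:
    """Apply the AI outcome only above a confidence threshold;
    otherwise escalate to a human who can override it."""
    if decision.confidence >= auto_threshold:
        return decision.outcome
    review_queue.append(decision)
    return "pending human review"

# Example: a confident decision is applied; a borderline one is escalated.
print(decide(Decision("application #1", "approve", confidence=0.98)))  # approve
print(decide(Decision("application #2", "decline", confidence=0.62)))  # pending human review
print(len(review_queue))  # 1 decision awaiting a trained reviewer
```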

6. Inform end users regarding AI-enabled decisions, interactions with AI, and AI-generated content

Under this Proposed Guardrail, organizations are required to:

  • inform people when AI is used to make or inform decisions relevant to them - this is consistent with the EU AI Act and the proposed reforms to the Privacy Act 1988;
  • inform people when they are interacting with an AI system directly; and
  • implement practices to ensure AI-generated outputs can be detected as artificially generated or manipulated - similar provisions have been introduced by the EU AI Act and Canada's AIDA. Methods to identify AI-generated outputs include content labeling and watermarking, with other methods still being developed. Organizations should determine the appropriate methods according to the assessments of their AI models and systems.  

Any information provided to end users must be clear, accessible, and relevant in order to comply with transparency obligations and reinforce public confidence. End users must also be made aware of processes to seek redress, discussed further in Proposed Guardrail 7 below.

The paper also notes that the 'implementation of this guardrail will evolve as AI detection and labeling techniques become more advanced.'
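Pending more mature watermarking techniques, a simple form of content labeling is to attach machine-readable disclosure metadata to each generated output. The following minimal sketch illustrates the idea; the label schema is our own, and a production system would more likely adopt an emerging provenance standard such as C2PA.

```python
import json
from datetime import datetime, timezone

def label_output(content: str, model_name: str) -> dict:
    """Wrap AI-generated content in a machine-readable disclosure label
    so downstream systems and end users can detect that it is AI-generated."""
    return {
        "content": content,
        "label": {
            "ai_generated": True,
            "generator": model_name,  # illustrative field names
            "generated_at": datetime.now(timezone.utc).isoformat(),
            "disclosure": "This content was generated by an AI system.",
        },
    }

# Example usage with a hypothetical model name.
labelled = label_output("Summary of your claim ...", model_name="acme-summarizer-v2")
print(json.dumps(labelled["label"], indent=2))
```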

7. Establish processes for people impacted by AI systems to challenge use or outcomes

This Proposed Guardrail is closely linked to Proposed Guardrails 5 and 6 to ensure those affected by AI systems are supported. Organizations must establish processes allowing people to contest decisions made using AI, submit complaints, and receive information regarding the use and outcomes of AI systems. These processes will help ensure public trust in the organization and its practices, particularly with respect to avoiding bias, which other Australian laws already protect against.

8. Be transparent with other organizations across the AI supply chain about data, models, and systems to help them effectively address risks

Transparency across the entire AI supply chain is key to ensuring all actors comply with accountability obligations, adhere to all guardrails, and identify and effectively mitigate any risks. This Proposed Guardrail aligns with the EU AI Act and Canada's AIDA, ensuring those involved in developing, deploying, and managing AI models share responsibility and make improvements where necessary.

The paper gives the example that developers should provide deployers with information including the model's characteristics, training data sources, capabilities, and limitations so that deployers can respond to risks that may emerge once the model is placed on the market. In turn, deployers must communicate incidents or failures to developers so they can work on improving the model.

9. Keep and maintain records to allow third parties to assess compliance with guardrails

Organizations must keep and maintain documentation about high-risk AI systems and GPAI systems across their whole lifecycle. These records should be made available to third parties, such as regulators, to demonstrate compliance with the guardrails and other relevant legislation. The paper details that records should include descriptions of the AI system, design specifications, descriptions of datasets, and risk management processes, among others.
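The record contents listed in the paper map naturally onto a machine-readable artifact kept alongside each system, in the spirit of a model card. The sketch below is a minimal illustration; the field names are our own, not a schema prescribed by the paper.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class AISystemRecord:
    """Lifecycle documentation for one high-risk AI system, kept
    available to third parties such as regulators (fields illustrative)."""
    system_description: str
    design_specifications: str
    datasets: list[str]  # names or provenance IDs
    risk_management_summary: str
    lifecycle_events: list[str] = field(default_factory=list)

    def log(self, event: str) -> None:
        """Append an auditable lifecycle event (retraining, incident, ...)."""
        self.lifecycle_events.append(event)

# Example: create a record at design time and keep logging across the lifecycle.
record = AISystemRecord(
    system_description="Resume-screening assistant",
    design_specifications="Gradient-boosted ranking model, v3",
    datasets=["applications-2023 (sha256: ...)"],
    risk_management_summary="Bias and privacy risks assessed quarterly",
)
record.log("2024-09-01: retrained on applications-2024")
print(json.dumps(asdict(record), indent=2))
```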

The paper highlights that organizations 'training large state-of-the-art GPAI models with potentially dangerous emergent capabilities must disclose these 'training runs' to the Australian Government,' a provision that aligns with the US Executive Order. Recognizing that maintaining documentation is often a major burden on organizations, particularly on small and medium-sized enterprises (SMEs), the consultation requests feedback and ideas on how to reduce this burden.

10. Undertake conformity assessments to demonstrate and certify compliance with the guardrails

Organizations must comply with accountability and quality assurance provisions and conduct conformity assessments before placing high-risk AI systems on the market, when systems are retrained or changes are applied that significantly affect compliance with the guardrails, and periodically to ensure continued compliance. These assessments may be conducted by deployers, third parties, or government entities and regulators. Organizations will receive certification once they have demonstrated compliance, and this must be communicated to the public to show adherence to the guardrails.

The paper specifies that, due to the limited number of organizations with the appropriate expertise to conduct such assessments, the Government may consider other options in the short and long term.

Mandating the Proposed Guardrails

To mandate these Proposed Guardrails, the paper proposes three options for the Australian Government to consider:

  • A domain-specific approach - Adapt existing regulations to include the guardrails. This approach would also allow sectors and their respective regulatory frameworks to address the harms relevant to them and would enable an 'incremental approach to new regulation, learning as regulation is developed.' However, the paper explains that this could create inconsistencies in how AI is regulated across different sectors, lead to AI regulation not being prioritized over other issues, and slow the process of 'achieving regulatory coverage across the economy.' It could also bring the burden of requiring several consultations across many legislative frameworks at the same time. The Government would therefore need strong coordination and clear expectations throughout this process.
  • A framework approach - Introduce framework legislation that would require amendments to existing laws. As described by the paper, while this would provide a more consistent approach, it may still leave gaps and inconsistencies, as AI regulation would be limited to what currently falls within the scope of existing laws and regulations.
  • A whole economy approach - Introduce a new AI act. Like the EU and Canada, this approach would mean introducing new legislation that includes the guardrails and an 'enforcement regime overseen by an independent AI regulator.' This would ensure regulatory efficiency and align with international regulations but would require strong coordination to avoid duplicated regulations.

Enhancing regulation across sectors and industries

Alec notes that "while there has been a growing (but limited) use of AI frameworks and policies to assess and manage the risks with AI development, deployment, and usage by Australian businesses, there has been little (if any) guidance from Government as regards AI regulation, creating somewhat of a regulatory vacuum with respect to AI in Australia. The collective guardrails (voluntary and ultimately mandatory) will greatly assist businesses in understanding what they should be thinking about, what steps they should be taking, and how they should be seeking to manage AI risks across their business with high-level core principles that are adaptable to their circumstances. As noted, while many larger businesses are already taking these matters into their own hands and implementing AI frameworks and policies, many Australian businesses were waiting for the Government to act in this space before embarking on any significant AI risk management framework, policy, or controls.

"As noted, once the mandatory guardrails are legislated for high-risk AI, we expect that the Government will then focus on other sectors, activities, and specific organizations or types of organizations where the voluntary or mandatory guardrails (currently nine out of ten of them being very similar) will be extended as mandatory for those areas, activities, sectors, etc. Given the Government's desired aim of a cyber-secure Australia by 2030 and its current focus on AI risks, we do not see the mandatory guardrails concept being limited to only high-risk AI; rather, it will be extended into other areas, activities, sectors, and the like as required to ensure a base level of AI risk management economy-wide."

Staying ahead

To stay ahead of any regulatory changes, Alec explains that "all businesses developing, deploying, or using AI should be implementing the voluntary guardrails from now. While there is some scope in how to apply them and what they should cover, as high-level overarching 'guardrails' or principles they can be readily adapted to each business's activity with respect to AI, the sector it is in, and the business activities it undertakes with AI. The voluntary guardrails should be considered and applied in the specific circumstances of each business and its relevant AI activity (e.g., developing, deploying, or using), and how each guardrail is applied should be set out in a relevant AI framework or policy.

"The AI framework or policy should incorporate how the business will apply each of the voluntary guardrails, any pre-approved uses of AI, pre-determined areas where the business will not use AI, the process to assess the risks and the ultimate 'sign-off' on them, together with 'rules of the road' for implementing AI projects within the business, including ongoing assurance (or audit) of the AI to ensure it meets any required conditions and the guardrails, as well as monitoring its performance, with appropriate escalation and remediation processes.

"These two activities will ensure that the business is ready for future mandatory guardrails, whether for high-risk AI activities or any other area to which mandatory guardrails are extended, and is, in the meantime, appropriately managing the risks of AI and, to the greatest extent possible, ensuring it either avoids or minimizes those risks and has a framework/plan in place to appropriately manage all AI risks."

The paper includes discussion questions throughout its topic areas, and comments can be submitted to the consultation until October 4, 2024.

Isabelle Strong, Editor
[email protected]

With comments provided by:
Alec Christie, Partner
[email protected]
Clyde & Co LLP, Sydney