China: Overview of AI Safety Governance Framework v1.0

The AI Safety Governance Framework v1.0 (the Framework) was published by the National Information Security Standardization Technical Committee (TC260) on September 9, 2024. It is neither a mandatory law or regulation nor a national standard for AI, but rather a general risk assessment and guidance document for the different roles involved in addressing artificial intelligence (AI) safety. Although not a law, it offers useful guidance on AI safety governance, particularly the risk assessment it proposes for safety risks. Dehao Zhang, Counsel at Fieldfisher, provides an overview of the Framework and how it may impact future AI legislation in China.

The Framework has clarified the principle of AI safety governance

The Framework has clarified that the development of AI should not be prohibited. Instead, AI innovation is encouraged, provided it rests on the principle of safety, which can be considered the first and most important principle of AI safety governance. Rather than blocking AI development, the Framework focuses on how to address the various risks. When facing risks, organizations should take both technical and organizational measures and clearly define stakeholders' responsibilities, especially those of algorithm developers, product/service providers, and users of the products and services.

The Framework has also clarified that China would like to cooperate with different countries and regions on AI safety governance and to contribute to and join the global AI safety governance system.

The Framework has explained the main points of AI safety governance

The Framework's proposal on AI safety governance is based on a risk management approach. The EU's Artificial Intelligence Act (the EU AI Act) has taken the same approach, although it appears more mature, particularly in classifying risks into different levels in order to identify different AI risks within its framework.

Based on the risk management approach, the Framework explains that risk assessments will first be conducted to analyze AI safety risks in different industries and areas. Secondly, with regard to model algorithms, training data, computing power facilities, products/services, and application scenarios, the Framework proposes different technical measures to improve the safety, fairness, reliability, and robustness of AI products and applications. It also encourages all stakeholders to take responsibility for AI safety governance. For example, technology research institutions, product and service providers, users, government agencies, industry associations, and social organizations should take measures to identify and prevent AI safety risks.

Drawing on the drafting of the EU AI Act, legislators should also consider the obligations of the government, as well as those of AI developers, AI product/service providers, users, and distributors.

The classification of AI safety risks

The Framework has classified AI safety risks into two categories: AI's inherent safety risks and safety risks in AI applications.

Inherent safety risks

Regarding AI's inherent safety risks, the Framework focuses on the risks that AI may carry by design, such as the black-box problem, bias and discrimination, lack of robustness, and unreliable output, as well as general risks such as theft, tampering, and adversarial attacks, since AI systems and AI technology will also become property (such as intellectual property) and tools for both users and attackers.

In addition, AI also raises several risk concerns from a data security and cybersecurity perspective. Since data must be fed into AI to train it, concerns arise around data collection/processing and the lawfulness of training on that data. Moreover, if the data is inaccurate, the AI model or algorithm may become inaccurate or incorrect due to flawed training. Data breaches are another problem while an AI model or algorithm is being developed: some processing may involve communication or access to data, and breaches may occur in those processes. Cybersecurity risks are reflected as system or infrastructure risks; however, the Framework also indicates that supply risks should be considered. For example, if certain countries impose export controls on chips, this can create risks when building the infrastructure for developing AI technology.

AI application safety risks

AI application safety risks can be divided into several aspects:

Firstly, from the network/cyber risk perspective, when users use an AI application, they may be concerned about the information and content it creates, such as the risk of illegal or misleading content. AI applications may also create data security and cybersecurity concerns, such as unauthorized access, data breaches, and cyber attacks. In addition, re-engineering or fine-tuning based on foundation models is commonly carried out in AI applications; if security flaws exist in the foundation models, the risk will be transmitted to downstream models.

Secondly, the Framework considers real-world risks to be a further concern: how AI applications affect the physical world and whether they will be used safely, for example, whether autonomous driving will affect traffic safety. Other questions also arise, such as whether AI will be used for crime, for military installations, or to manufacture nuclear weapons, and other scenarios of technology abuse.

Thirdly, cognitive risks are another concern. AI applications may further create information cocoons, so that users only receive the information they are used to receiving or prefer to receive. AI applications may also mislead users when they generate inaccurate or false information, causing misunderstanding.

Lastly, ethical risks are a problem that also relates to personal data processing. Discrimination and bias may arise from personal labels or categories, which may lead to differential treatment. In addition, the Framework also contemplates that, if AI were to develop self-awareness and compete with humans for resources, it would pose a serious risk to human society.

Some of these risks may be remote at this stage, but others are everyday concerns. The current focus is on data protection compliance problems, the health of cyberspace content (content morality), and the quality of AI products.

Comprehensive governance measures

The technical measures are general and broad, but the Framework has also proposed comprehensive governance measures, which read as a summary of potential future legal requirements:

To implement tiered and category-based management for AI applications

The Framework has suggested classifying and grading AI systems based on their features, functions, and application scenarios, and setting up a testing and assessment system based on AI risk levels. It has also suggested registration/filing requirements for certain AI systems, which may apply to high-risk scenarios or industries such as finance, education, and healthcare. Currently, the legal requirements only require AI systems with public opinion characteristics to be filed with the Cyberspace Administration of China (CAC) or local CAC offices.

To develop a traceability management system for AI services

It is recommended to label output to indicate whether it is AI-generated, to formulate and introduce standards and regulations on AI output, and to clearly specify explicit and implicit labels.

To improve AI data security and personal information protection regulations

This may not be the best practice: in most countries' legislation, it is not common to have a dedicated regulation on AI data security or AI personal data protection. Rather than enacting extensive new legislation, authorities can publish guidance to clarify and explain how the Data Security Law (DSL) and the Personal Information Protection Law (PIPL) apply to AI.

To create a responsible AI R&D and application system

This suggestion aims to improve the quality of AI output. On the one hand, AI should be trained to be 'good' from the human side, meaning AI systems and applications should be created or designed in line with human morality and values. On the other hand, the training data and materials fed to AI systems should be of high quality. AI-related ethical review standards, norms, and guidelines should be established to improve the ethical review system, which may address some problems of ethics or morality.

To strengthen AI supply chain security

As the Framework suggests, knowledge sharing in AI should be promoted, AI technologies should be made available to the public under open-source terms, and AI chips, frameworks, and software should be jointly developed. However, the risk lies not in sharing knowledge but in overseas suppliers: the AI supply chain itself can be a source of problems or risk, and today, technology localization or AI localization may present a further risk.

To advance research on AI explainability

The Framework points to transparency, trustworthiness, and error-correction mechanisms, which are also potential requirements of a future AI law, especially where AI is offered as a product or service.

To share information on AI safety risks and threats and improve emergency response

The government should be obliged to build a mechanism for sharing recent incidents or vulnerabilities and to alert organizations to threats and risks. Organizations should also consider the cybersecurity, data security, and personal information protection laws under which notification of cybersecurity or data incidents is already required. However, can these obligations cover an AI security incident? If not, a new incident notification mechanism should be established by law.

To enhance the training of AI safety talents, to establish and improve the mechanisms for AI safety education, industry self-regulation, and social supervision, and to promote international exchange and cooperation on AI safety governance

The government or authorities can take measures to encourage companies and industries to participate in and improve mechanisms for AI safety education, industry self-regulation, and social supervision. This will be one of the main points in developing AI safety governance, and AI safety talent and international cooperation can be particularly helpful in this respect.

The Framework also introduces useful guidance on AI applications, including safety guidelines for model algorithm developers, safety guidelines for AI service providers, and safety guidelines for users in key areas, all of which aim to provide best practices. However, these are not mandatory laws but helpful recommendations.

Although it is not a law, the Framework reflects, to some extent, China's search for solutions for AI safety, and it will likely be considered by legislators when drafting and amending China's AI law.

Dehao Zhang Counsel
[email protected]
Fieldfisher, Beijing