
Japan: AI Safety Institute publishes NIST AI RMF and AI Guidelines for Business Crosswalk

On September 27, 2024, the AI Safety Institute (AISI) published a crosswalk between the National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF) and the Japan AI Guidelines for Business (AI GfB). The crosswalk is the second published by the AISI and identifies differences in how key concepts are addressed by the NIST AI RMF and the Japan AI GfB.

Regarding adversarial attacks, the crosswalk notes that both the NIST AI RMF and the Japan AI GfB treat adversarial attacks as a potential risk to AI system vulnerabilities. However, only the NIST AI RMF emphasizes adversarial testing through red teaming as a suggested risk management approach. Similarly, both the NIST AI RMF and the Japan AI GfB recommend regular post-deployment monitoring mechanisms, but only the Japan AI GfB suggests incentives for reporting post-deployment issues.

On decommissioning AI systems, the NIST AI RMF focuses on the importance of a diversity of AI actors in mapping, measuring, and managing AI risks, whereas the Japan AI GfB does not emphasize the inclusion of interdisciplinary expertise from AI actors. Regarding pre-trained models, the NIST AI RMF emphasizes that they must be monitored for privacy and bias risks, among other things, and that internal risk controls for evaluating third-party technologies should be identified and documented to manage the risks of pre-trained models. The Japan AI GfB expands on this security issue, recommending that model reliability measures, traditional security measures, and detection and prevention measures be applied to pre-trained models.

Both also recognize the risk of model drift, but only the NIST AI RMF notes that regular monitoring processes should be conducted to detect and respond to such drift.

You can read the press release (only available in Japanese) here, the crosswalk here, the NIST AI RMF here, and the Japan AI GfB here.