
USA: White House publishes National Security Framework for AI

On October 24, 2024, the White House published the Framework to Advance AI Governance and Risk Management in National Security (the Framework) as part of a memorandum fulfilling the directive outlined in Executive Order 14110 of October 30, 2023, on Safe, Secure, and Trustworthy Development and Use of AI.

The Framework functions as a national security-focused counterpart to the Office of Management and Budget's (OMB) Memorandum M-24-10 Advancing Governance, Innovation, and Risk Management for Agency Use of AI. The Framework is targeted at the use of artificial intelligence (AI) in military operations, applying to both new and existing AI developed, used, or procured by or on behalf of the U.S. Government, and to system functionality that implements or is reliant on AI.

The Framework outlines prohibited use cases, including using AI with the intent or purpose to, among others:

  • profile, target, or track activities of individuals based solely on their exercise of rights protected under the Constitution and applicable U.S. domestic law, including freedom of expression, association, and assembly rights;
  • unlawfully disadvantage an individual based on their ethnicity, national origin, race, sex, gender, gender identity, sexual orientation, disability status, or religion;
  • infer or determine, relying solely on biometric data, a person's religious, ethnic, or racial identity, sexual orientation, disability status, gender identity, or political identity; or
  • detect, measure, or infer an individual's emotional state from data acquired about that person, except for a lawful and justified reason such as for the purposes of supporting the health of consenting U.S. Government personnel.

The Framework also identifies high-impact AI use cases, providing that an AI system is presumed to be high impact if the AI use controls or significantly influences, among other things:

  • tracking or identifying individuals in real-time, based solely on biometrics, for military or law enforcement action;
  • designing, developing, testing, managing, or decommissioning sensitive chemical, biological, radiological, or nuclear materials, devices, and/or systems that could be at risk of being unintentionally weaponizable; or
  • determining an individual's immigration classification, including related to refuge or asylum, or other entry or admission into the U.S.

Notably, the Framework details minimum risk management practices for high-impact and federal personnel-impacting AI uses, including:

  • completing an AI risk and impact assessment, with specific considerations;
  • testing an AI system sufficiently in a realistic context;
  • training and assessing the AI system's operators; and
  • ensuring appropriate human oversight of AI-based decisions.

The Framework further requires covered agencies to conduct an annual inventory of their high-impact AI use cases, with the required steps grounded in the broader principles of data management and transparency. This includes requirements relating to internal management structures, with covered agencies needing to establish an AI Governance Board chaired by the Chief AI Officer.

Finally, the Framework details that covered agencies must establish standardized training requirements and guidelines for the workforce on the responsible use and development of AI. Specific consideration and training are to be given to those who interact with AI regularly, and whistleblower programs are to be updated to reflect AI-related procedures.

You can read the Memorandum here and the Framework here.