
Netherlands: Court of Audit publishes report on AI in central government

On October 16, 2024, the Netherlands Court of Audit published a report on AI in the Dutch central government.

The Court of Audit investigated 70 government organizations, which together reported that they were using or had used 433 artificial intelligence (AI) systems. However, 167 of these AI systems were still at an experimental stage, and 88% of organizations used no more than three AI systems. The police and the Employee Insurance Agency used the most, with 23 and 10 AI systems respectively.

In addition, the report found that only 5% of the 433 reported AI systems were entered in the algorithm register. While not all AI systems must be entered in the register, all systems classified as high-risk AI systems under the EU AI Act must be.

The report highlighted that most AI systems were used to improve internal processes, including:

  • knowledge processing, such as document analysis;
  • inspection and enforcement, such as checking document compliance;
  • maintenance, including infrastructure checks;
  • investigation, including biometric identification; and
  • monitoring, such as detecting suspicious behavior in computer systems.

Most of the reported AI systems were developed entirely in-house, and the organizations concerned are therefore subject to particular requirements as providers of AI systems under the EU AI Act. Although fewer AI systems were bought from third parties, the report underlined that many IT systems now contain a substantial AI element, meaning organizations may procure AI without being aware of it.

However, the report determined that organizations have an incentive to classify their AI systems as low risk: some of the reported AI systems would have qualified as prohibited AI systems under the EU AI Act, yet none were reported as such.

You can read the press release here and download the report here.