Google DeepMind, Microsoft and xAI Sign Agreements for US National Security AI Testing

Google DeepMind, Microsoft, and xAI grant US government early access to frontier AI models for national security testing.

May 5, 2026
Technobezz

Google DeepMind, Microsoft and xAI signed agreements Tuesday with the Center for AI Standards and Innovation (CAISI) at the Department of Commerce's National Institute of Standards and Technology. Under the deals, the companies will submit new AI models for evaluation before public release and continue to allow post-deployment assessment.

CAISI has already completed more than 40 such evaluations, including on state-of-the-art models that remain unreleased. The agreements build on pacts OpenAI and Anthropic signed with CAISI's predecessor, the US AI Safety Institute, in 2024. Those earlier deals were later renegotiated to match CAISI's current directives under Secretary Howard Lutnick and America's AI Action Plan.

CAISI now serves as industry's primary point of contact within the US government for testing, collaborative research and best practices related to commercial AI systems. The reviews focus on national security risks tied to cybersecurity, biosecurity and chemical weapons. To thoroughly evaluate these capabilities, developers frequently provide CAISI with models that have reduced or removed safety guardrails, allowing government evaluators to probe for vulnerabilities more aggressively.

"Independent, rigorous measurement science is essential to understanding frontier AI and its national security implications," said CAISI Director Chris Fall. "These expanded industry collaborations help us scale our work in the public interest at a critical moment."

Evaluators from across government participate in the testing and provide feedback through the CAISI-convened TRAINS Taskforce, an interagency group focused on AI national security concerns. The agreements also support testing in classified environments.

According to The Hill, the announcement came one day after reports that the White House was considering an executive order to establish an AI working group for oversight procedures. The administration characterized that reporting as speculation.

Microsoft separately announced a similar agreement in the UK on Tuesday with the government-backed AI Security Institute. "Testing for national security and large-scale public safety risks necessarily must be a collaborative endeavor with governments," Microsoft wrote in a blog post about both deals.
