Britain partners with Microsoft to build a deepfake detection framework

Feb 5, 2026

Britain announced a partnership with Microsoft on Thursday to develop a deepfake detection system, responding to an explosion of AI-generated deceptive content that reached 8 million instances in 2025.

The UK government will collaborate with Microsoft, academics, and technical experts to create an evaluation framework for detecting manipulated audio, video, and image files.

Technology Minister Liz Kendall stated that deepfakes "are being weaponized by criminals to defraud the public, exploit women and girls, and undermine trust in what we see and hear."

Deepfake proliferation surged from 500,000 instances in 2023 to 8 million in 2025, a sixteen-fold increase according to government figures. The framework aims to establish consistent standards for assessing detection tools across industries while testing technologies against real-world threats including fraud, sexual abuse, and impersonation.

Britain recently criminalized the creation of non-consensual intimate images, part of broader efforts to combat AI-generated deception.

The initiative builds on Microsoft's 2024 call for the US Congress to pass legislation targeting deepfake fraud, in which Vice Chair and President Brad Smith argued in a blog post for a federal "deepfake fraud statute."

Regulatory pressure intensified earlier this year when Elon Musk's Grok chatbot was found generating non-consensual sexualized images, including images of children. UK communications watchdog Ofcom and privacy regulator the Information Commissioner's Office launched parallel investigations into the platform.

Minister for Safeguarding Jess Phillips, who said she has herself been the target of deepfakes, added:

"For the first time, this framework will take the injustice faced by millions to seek out the tactics of vile criminals, and close loopholes to stop them in their tracks so they have nowhere to hide. Ultimately, it is time to hold the technology industry to account, and protect our public, who should not be living in fear."

The detection framework will help law enforcement identify gaps in current capabilities while setting transparent expectations for technology providers. It extends work from the Accelerated Capability Environment's 2024 Deepfake Detection Challenge, which brought together industry and academic experts to develop better detection solutions.

Detection tools will undergo real-world testing against threats supercharged by AI capabilities. The system aims to restore public trust in digital content while establishing accountability standards for technology companies that deploy generative AI.

Global regulators are struggling to keep pace with rapid AI development, and the UK partnership represents a significant government-industry collaboration. Framework development begins immediately, with initial standards expected within months.
