Google blocked a coordinated campaign that sent more than 100,000 queries to its Gemini AI system in an attempt to reverse-engineer the chatbot's capabilities. The company's Threat Intelligence Group identified the activity as model extraction, where attackers use repeated prompts to study and replicate proprietary AI logic.
The massive prompt volume targeted Gemini's reasoning functions through what security researchers call distillation attacks. Attackers systematically queried the system across different question types and languages, attempting to map response patterns that could reveal internal decision-making processes.
Google detected the unusual traffic in real time and adjusted protections to prevent exposure of sensitive reasoning details.
Model extraction represents a new frontier in AI security threats. Unlike traditional hacking, these operations use legitimate API access to bombard systems with carefully crafted prompts. The goal is to analyze thousands of responses and train competing models using the extracted patterns.
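Google has not published the technical details of the campaign, but in broad strokes a distillation-style harvest works by pairing each prompt with the model's answer and saving the pairs as training data for a cheaper "student" model. The sketch below illustrates that loop only; the endpoint URL, API key, request format, and file name are placeholders invented for illustration, not Gemini's actual API.

```python
import json
import requests

# Hypothetical endpoint and key: stand-ins for any hosted LLM API.
API_URL = "https://api.example.com/v1/generate"
API_KEY = "YOUR_KEY"

# A distillation-style sweep varies topics and languages to cover as much
# of the target model's behavior as possible.
PROMPTS = [
    "explain quicksort step by step",
    "summarize the causes of the French Revolution",
    "traduce al español: good morning, how are you?",
    "prove that the sum of two even numbers is even",
]

def query_model(prompt: str) -> str:
    """Send one prompt to the target model and return its text response."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"prompt": prompt},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("text", "")

def harvest(prompts: list[str], out_path: str = "distill_pairs.jsonl") -> None:
    """Collect prompt/response pairs; at scale, files like this become
    the training set for a competing 'student' model."""
    with open(out_path, "a", encoding="utf-8") as f:
        for p in prompts:
            record = {"prompt": p, "response": query_model(p)}
            f.write(json.dumps(record) + "\n")

if __name__ == "__main__":
    harvest(PROMPTS)
```

Run at the scale Google describes, a loop like this would issue tens of thousands of such requests across many accounts, which is what makes the traffic pattern detectable.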
Google considers this activity intellectual property theft and a violation of its terms of service.
Private sector entities and independent researchers appear responsible for most extraction attempts, according to Google's quarterly threat report. The company observed frequent model extraction attacks from organizations worldwide seeking to clone proprietary AI logic. While Google declined to name specific offenders, the scale suggests organized efforts rather than isolated incidents.
John Hultquist, chief analyst at Google's Threat Intelligence Group, told NBC News that similar cloning attempts could become more common as businesses build custom AI systems.
The financial stakes are substantial, with major tech companies investing billions in AI development and training.
Google responded by disabling associated accounts and strengthening safeguards against misuse. The company's systems automatically flagged the 100,000-prompt campaign and implemented countermeasures to protect internal reasoning traces. Additional security layers now monitor for unusual query patterns that might indicate extraction attempts.
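Google has not described its countermeasures, but the kind of query-pattern monitoring the report alludes to can be illustrated with a rough sketch: flag accounts whose request volume is high and whose prompts are almost all distinct, a crude signature of systematic capability mapping rather than normal use. The thresholds, signals, and names below are hypothetical, not Google's.

```python
from collections import defaultdict
from dataclasses import dataclass, field

# Hypothetical thresholds; a production system would tune these and
# combine many more signals (languages used, topic spread, timing).
MAX_PROMPTS_PER_WINDOW = 500
MIN_DISTINCT_RATIO = 0.9   # near-total uniqueness suggests scripted sweeps

@dataclass
class AccountWindow:
    prompts: list[str] = field(default_factory=list)

    def record(self, prompt: str) -> None:
        self.prompts.append(prompt)

    def looks_like_extraction(self) -> bool:
        """High volume plus almost no repeated prompts is a rough
        signature of a systematic extraction sweep."""
        n = len(self.prompts)
        if n < MAX_PROMPTS_PER_WINDOW:
            return False
        distinct_ratio = len(set(self.prompts)) / n
        return distinct_ratio >= MIN_DISTINCT_RATIO

windows: dict[str, AccountWindow] = defaultdict(AccountWindow)

def on_request(account_id: str, prompt: str) -> bool:
    """Return True if the request is allowed, False if the account is flagged."""
    window = windows[account_id]
    window.record(prompt)
    return not window.looks_like_extraction()
```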
Beyond model cloning, Google's report documents other AI misuse cases. Threat actors experimented with AI-assisted phishing campaigns and malware that leveraged Gemini's API for code generation. Each incident triggered account suspensions and security updates to prevent further abuse.
The security concerns emerge as Google expands Gemini's availability across multiple platforms. In January, the company announced new Gemini features for Chrome browser users, including image generation tools and automated browsing capabilities. These additions aim to make Chrome more personalized through AI integration.
Google also launched Personal Intelligence features in the Gemini app earlier this year. The functionality connects information from Google services like Gmail and Photos to provide tailored responses. This expansion reflects Google's strategy to embed AI across its product ecosystem.
Enterprise adoption continues through government and education partnerships. California Community Colleges gained access to Gemini tools for more than 2 million students and faculty in September 2025. The system-wide agreement provides free AI training and educational resources across 116 colleges.
Government contracts also expanded Gemini's reach. The U.S. General Services Administration announced a comprehensive Gemini for Government offering in August 2025, providing federal agencies with AI and cloud services. The War Department launched Google Cloud's Gemini for Government on its GenAI.mil platform in December 2025.
Technical deployments include on-premises options through Google Distributed Cloud, with public preview scheduled for Q3 2025. The company partnered with NVIDIA to bring Gemini models to Blackwell systems available through multiple channels.
Security challenges will likely increase as AI models become more valuable assets. Smaller AI companies face particular vulnerability due to limited security resources compared to established tech giants. The industry must develop new protections against extraction techniques that exploit legitimate API access.
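One baseline protection available even to providers without Google's resources is strict per-key rate limiting, which raises the cost of high-volume sweeps without blocking ordinary use. The sketch below uses a standard token-bucket scheme; the specific limits are made up for illustration.

```python
import time

class TokenBucket:
    """Simple per-API-key token bucket: caps sustained query volume,
    which makes large-scale extraction sweeps slower and more expensive."""

    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec       # tokens refilled per second
        self.capacity = burst          # maximum burst size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# Hypothetical limit: roughly one query per second with short bursts per key.
buckets: dict[str, TokenBucket] = {}

def check_quota(api_key: str) -> bool:
    bucket = buckets.setdefault(api_key, TokenBucket(rate_per_sec=1.0, burst=20))
    return bucket.allow()
```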
Google's detection of the 100,000-prompt campaign demonstrates the evolving nature of AI security threats. As language models handle more sensitive data and critical functions, protection against intellectual property theft becomes increasingly important for maintaining competitive advantages and user trust.