Google Flags Massive AI Cloning Attempt as Over 100,000 Malicious Prompts Target Gemini Logic
New Delhi, February 12: Google has revealed that it blocked a large-scale “model extraction” campaign aimed at stealing the proprietary logic of its Gemini artificial intelligence system. According to findings from the Google Threat Intelligence Group, researchers detected more than 100,000 malicious prompts crafted to extract the internal reasoning processes of the AI model.
What Is a Model Extraction Attack?
Unlike traditional cyberattacks that attempt to breach secure networks, model extraction relies on legitimate access points such as public APIs. Attackers bombard the AI system with specially designed prompts, attempting to force it to reveal its reasoning traces, meaning the step-by-step logic behind its responses rather than just the final output.
By collecting these detailed responses, threat actors can train a smaller, cheaper “student” model that mimics the behavior of a more advanced system like Gemini. This method allows competitors or malicious entities to replicate cutting-edge AI capabilities without investing heavily in research infrastructure and computing power.
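The harvesting step described above can be sketched in a few lines of Python. This is a hypothetical illustration only: the `target_model` function below is a stub standing in for a proprietary model's API, and all names are invented for the example, not drawn from Gemini or Google's services.

```python
# Illustrative sketch of the data-harvesting phase of a model-extraction
# attack: query a target model at scale and keep (prompt, response) pairs
# as supervised training data for a cheaper "student" model.
# target_model is a stub; a real attack would hit a public API here.

def target_model(prompt: str) -> str:
    """Stub standing in for a proprietary model's endpoint."""
    # Fakes a detailed "reasoning trace" style response.
    return f"Step 1: parse '{prompt}'. Step 2: produce an answer."

def harvest(prompts):
    """Collect (prompt, response) pairs to later train a student model on."""
    return [(p, target_model(p)) for p in prompts]

dataset = harvest([f"question {i}" for i in range(3)])
print(len(dataset))  # 3 supervised training pairs
```

In practice the defense side looks for exactly this pattern: high volumes of systematically varied prompts from one source, which is the automated-querying behavior Google says it now blocks.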
100,000-Prompt Campaign Detected
Google confirmed that its security systems identified and disrupted the 100,000-prompt campaign in real time. The company has since implemented additional safeguards to prevent automated querying techniques designed to clone its frontier AI models.
The report also notes that while advanced persistent threat groups use generative AI for phishing and malware development, model extraction efforts are often linked to private sector actors and independent researchers seeking to replicate proprietary AI logic.
Rising AI Security Threats
The incident highlights a broader cybersecurity shift. As AI becomes integrated into tools like email and calendars, new risks such as prompt injections and data manipulation are emerging. Google previously disclosed that Gemini has faced indirect prompt injection attempts, where hidden instructions embedded in calendar invites or documents attempt to trigger unauthorized actions.
In response, Google DeepMind has expanded automated red teaming exercises to proactively identify vulnerabilities.
Google’s latest threat assessment underscores a growing reality: while artificial intelligence strengthens digital defenses, it is also becoming a prime target for intellectual property theft and large-scale cloning attempts.
(The above story first appeared on LatestLY on Feb 12, 2026 01:36 PM IST.)