A significant security vulnerability has been identified in Google’s latest API key architecture, potentially exposing the personal data of millions of Android users. A report by cybersecurity firm CloudSEK reveals that hardcoded API keys found within the source code of several high-profile applications could allow malicious actors to access private interactions with the Gemini AI chatbot. According to the findings, the flaw impacts at least 22 applications that collectively account for over 500 million installations.
The vulnerability stems from an architecture Google initially described as secure for integration into Android codebases. However, the investigation found that these keys gained "credential privileges" after being embedded, allowing hackers to view user-shared images, audio, and documents stored via the Files API.
Scale of Exposure and Impacted Applications
CloudSEK’s mobile app security engine, BeVigil, conducted a scan of the top 10,000 Android applications to assess the extent of the leak. The researchers identified 32 active Google API keys that had been hardcoded into the applications' source code. Hardcoding—the practice of embedding data directly into the code—makes it relatively easy for attackers to extract sensitive credentials through reverse engineering.
Among the notable applications identified in the report are Google Pay for Business, OYO Hotel, The Hindu, WAStickersApps, and ISS Live Now. The report suggests that many developers likely followed Google’s own documentation for embedding services like Firebase or Google Maps, which inadvertently led to the inclusion of the vulnerable "AIza..." format API keys.
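Keys of this kind are typically found by decompiling an APK and scanning the resulting source for Google's well-known key format: the prefix "AIza" followed by 35 URL-safe characters. A minimal sketch of such a scan, assuming the APK has already been decompiled into a local directory (the pattern is the publicly documented Google key shape; the directory layout is illustrative):

```python
import re
from pathlib import Path

# Google API keys share a recognisable shape: "AIza" plus 35
# characters drawn from letters, digits, hyphen, and underscore.
GOOGLE_KEY_PATTERN = re.compile(r"AIza[0-9A-Za-z\-_]{35}")

def find_hardcoded_keys(text: str) -> list[str]:
    """Return every substring that matches the Google API key format."""
    return GOOGLE_KEY_PATTERN.findall(text)

def scan_decompiled_sources(root: str) -> dict[str, list[str]]:
    """Scan all files under `root` (e.g. decompiled smali/Java sources)
    and map each file path to the key-shaped strings found inside it."""
    hits: dict[str, list[str]] = {}
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            matches = find_hardcoded_keys(path.read_text(errors="ignore"))
        except OSError:
            continue  # unreadable file; skip rather than abort the scan
        if matches:
            hits[str(path)] = matches
    return hits
```

Because the pattern is purely textual, the same scan works on configuration files and string resources as well as code, which is why a hardcoded key offers essentially no protection once an attacker has the APK.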
Risks to User Privacy and AI Context
The primary risk for users involves the data they provide to Google’s Gemini AI. Because the compromised keys provide access to the Files API, a hacker could potentially read, copy, or exfiltrate any files shared with the chatbot. Additionally, the "cached AI context"—the information the AI uses to maintain the flow of a conversation—is also at risk of being intercepted.
Beyond individual privacy, the flaw poses a financial threat to developers and businesses. Since Gemini API integration is a paid service, unauthorised usage by hackers could result in significantly inflated billing for the affected companies. This follows a similar pattern discovered by Truffle Security earlier this year involving Google Cloud projects.
Recommended Mitigation for Developers
In response to the findings, security experts are urging developers to immediately review their API key configurations within the Google Cloud Platform (GCP). CloudSEK has recommended that companies avoid hardcoding any API keys directly into mobile app source code, suggesting the use of more secure backend proxy servers or secret management tools instead.
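The backend-proxy approach recommended above means the mobile app never holds the credential at all: the app calls a server the developer controls, and only that server attaches the key when forwarding requests upstream. A minimal sketch of the server side, assuming the key is supplied through an environment variable named GEMINI_API_KEY populated by a secret manager (the variable name and helper functions are illustrative, not part of any Google SDK):

```python
import os

def load_gemini_key() -> str:
    """Fetch the Gemini API key from the server's environment.

    The key lives only on the backend, injected at deploy time by a
    secret manager, so it never ships inside the mobile app binary.
    """
    key = os.environ.get("GEMINI_API_KEY")
    if not key:
        raise RuntimeError(
            "GEMINI_API_KEY is not set; configure it via your secret "
            "manager rather than hardcoding it in source control."
        )
    return key

def build_upstream_headers() -> dict[str, str]:
    """Headers the proxy attaches before forwarding a client request
    to the upstream API; the client itself never sees the key."""
    return {"x-goog-api-key": load_gemini_key()}
```

The design choice here is that rotation and revocation become server-side operations: if a key leaks, it can be replaced in the secret manager without shipping a new app build, which is impossible once a key is baked into an APK.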
While Google has not yet issued a formal fix for the architectural design, the report serves as a warning for the industry regarding the rapid integration of AI tools. Developers are advised to check whether their current API keys carry unnecessary permissions that grant a broader range of access than their specific app functions require.
(The above story first appeared on LatestLY on Apr 10, 2026 08:36 AM IST. For more news and updates on politics, world, sports, entertainment and lifestyle, log on to our website latestly.com).