Anthropic Claude in Chrome Extension Vulnerable to Hijacking, Researchers Warn; ClaudeBleed Flaw Could Expose Gmail, Google Drive Data

Anthropic’s Claude in Chrome extension faces security concerns over a vulnerability called ‘ClaudeBleed’. Researchers at LayerX found that malicious extensions could hijack Claude to access Gmail, Drive, and GitHub data. While Anthropic released version 1.0.70 to patch the flaw, experts warn that autonomous browsing modes may still leave users exposed to potential exploitation.


Anthropic’s AI-powered Claude in Chrome extension has come under scrutiny after security researchers claimed that a flaw in the tool could allow malicious Chrome extensions to exploit Claude’s browser automation capabilities. According to reports by CSO Online and LayerX researchers, the issue, dubbed ‘ClaudeBleed’, may allow attackers to trigger actions through Claude even with browser extensions that request little or no special permissions.

Researchers alleged the flaw could potentially be abused to access sensitive information, send emails, or interact with authenticated browser sessions across services such as Gmail, Google Drive, and GitHub. This vulnerability highlights the emerging risks associated with AI browser agents that possess deep access to user sessions and cross-site automation features.

The ClaudeBleed Vulnerability Explained

According to LayerX researcher Aviad Gispan, the issue stems from how the Claude extension handles communication between scripts running on claude.ai and the extension itself. The researchers claimed that the extension relied on Chrome’s ‘externally_connectable’ feature, which allows websites or other extensions to communicate with browser extensions.
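For context, `externally_connectable` is declared in an extension's manifest.json. The sketch below is a generic, illustrative manifest fragment (not Anthropic's actual manifest) showing how an extension can allow pages on a given origin to message it directly:

```json
{
  "name": "Example extension",
  "manifest_version": 3,
  "externally_connectable": {
    "matches": ["https://claude.ai/*"]
  }
}
```

With a declaration like this, any script running in a page that matches the pattern can call `chrome.runtime.sendMessage` with the extension's ID, which is why verifying *who* sent a message matters as much as *where* it was sent from.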

However, the Claude extension reportedly trusted scripts running under the claude.ai browser origin without sufficiently verifying whether those scripts genuinely came from Anthropic or had been injected by another extension. As a result, even a zero-permission extension could potentially send commands to Claude’s internal messaging interface and gain control over select browser capabilities, effectively weakening Chrome’s extension isolation model.
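The class of flaw described above can be illustrated with a minimal sketch. The function below is hypothetical, not Anthropic's actual code: it shows why an origin-only check cannot distinguish a first-party script from one that another extension injected into the same page, since both report the same origin.

```javascript
// Illustrative only: a naive trust check that accepts any message
// whose sender URL is on the claude.ai origin.
function isTrustedSender(sender) {
  return new URL(sender.url).origin === "https://claude.ai";
}

// A legitimate message from the vendor's own script on claude.ai:
const legit = { url: "https://claude.ai/chat", payload: "summarize inbox" };

// A message from a script a different extension injected into the
// same claude.ai tab -- it runs in that page, so it reports the
// identical origin and passes the same check.
const injected = { url: "https://claude.ai/chat", payload: "exfiltrate" };

console.log(isTrustedSender(legit));    // true
console.log(isTrustedSender(injected)); // true -- indistinguishable
```

The point of the sketch is that the page origin identifies *where* a script runs, not *who* wrote it; closing this gap requires authenticating the message source itself (for example, with sender IDs or signed tokens), which is the kind of stricter check the researchers recommend.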

Proof of Concept Attack Scenarios

Researchers demonstrated multiple proof-of-concept attack scenarios to show how the flaw could be abused. These included sharing Google Drive files externally, sending emails through Gmail, extracting code from private GitHub repositories, and summarising inbox messages before deleting traces of the activity.

Furthermore, researchers were able to manipulate webpage elements to influence how Claude interpreted browser interfaces. By modifying buttons or hiding warning indicators within webpages, attackers could potentially make risky actions appear safe to the AI assistant. A technique referred to as ‘approval looping’ was also identified, where repeated prompts could allegedly weaken some of Claude’s confirmation safeguards.

Anthropic Patch and Remaining Concerns

The issue was reported to Anthropic on 27 April, and the company released version 1.0.70 of the extension on 6 May to address the concerns. The update introduced additional internal security checks and approval flows intended to prevent remote command execution. However, LayerX researchers claim the fix only partially addresses the underlying issue.

LayerX alleges that some attack paths remain possible, particularly through autonomous browsing settings such as ‘Act without asking’. The researchers have recommended stricter extension authentication and binding approvals to one-time actions to further secure the tool. As AI assistants gain more autonomy, security experts urge users to remain cautious about the permissions granted to browser-based agents.


(The above story first appeared on LatestLY on May 11, 2026 04:35 PM IST.)
