OpenAI Sued for Sharing ChatGPT Data with Google, Class-Action Privacy

By Jennifer Sherman · May 14, 2026 · 4 Min Read

OpenAI Global LLC now faces a class-action complaint filed in the Southern District of California. The lawsuit alleges the company surreptitiously integrated Meta’s Facebook Pixel and Google Analytics into its ChatGPT web interface. Consequently, highly sensitive chatbot conversations became monetizable tracking data for online advertising ecosystems.

Filed by California resident Amargo Couture on behalf of all U.S. users who entered queries into ChatGPT.com, the suit claims OpenAI disclosed users’ chat topics, identifiers, and contact details to Meta and Google without consent, in violation of the federal Electronic Communications Privacy Act (ECPA), California’s Invasion of Privacy Act (CIPA), and state constitutional privacy rights.

According to the complaint, ChatGPT is routinely used to discuss “sensitive and personal topics” such as finances, health, and legal issues, with some estimates suggesting that a significant portion of company data pasted into ChatGPT is confidential.

Users allegedly had a reasonable expectation that these conversations would remain between themselves and OpenAI, not be piped to third‑party ad tech platforms.

The litigation lands amid a broader wave of privacy and copyright fights over generative AI and follows earlier suits that challenged OpenAI’s data‑collection and training practices.

OpenAI Hit With Class-Action Privacy Lawsuit

For Meta, the complaint centers on the Facebook Pixel code embedded in ChatGPT’s web pages, which allegedly triggers silent, real‑time HTTP requests to Facebook’s servers every time a user interacts with the site.

These requests are said to include both the content‑derived context (for example, the browser tab title “Super Bowl 2005 Winner” derived from a user query) and a set of cookies such as c_user, fr, and fbp that can be tied back to a specific Facebook account via the user’s Facebook ID.
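As a rough illustration of the mechanism the complaint describes, a tracking pixel typically fires a GET request whose query string carries page context such as the document title, while identifying cookies ride along in the request headers. The sketch below is a simplified model, not the actual Meta Pixel payload; the endpoint path, parameter names, and pixel ID are illustrative assumptions.

```python
from urllib.parse import urlencode

def build_pixel_request(page_title: str, page_url: str, pixel_id: str) -> str:
    """Model of a generic tracking-pixel GET request (illustrative only)."""
    params = {
        "id": pixel_id,     # the site's pixel ID (hypothetical value here)
        "ev": "PageView",   # event name
        "dl": page_url,     # document location
        "dt": page_title,   # document title, e.g. a chat-derived tab title
    }
    # Browser cookies such as c_user, fr, and fbp are not in the URL;
    # they travel in the Cookie header of the same request, which is
    # what lets the hit be tied back to a logged-in account.
    return "https://www.facebook.com/tr/?" + urlencode(params)

url = build_pixel_request("Super Bowl 2005 Winner",
                          "https://chatgpt.com/", "000000000000")
```

The key point the complaint makes is visible here: the page title, derived from the user's query, leaves the browser as part of an ordinary-looking image request.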

Meta’s own documentation is cited to argue that this telemetry is then fed into its “Core Audiences,” “Custom Audiences,” and “Lookalike Audiences” systems for highly granular ad targeting across Facebook and Instagram.

On the Google side, the complaint alleges that Google Analytics and associated Google Ads tags capture hashed email addresses used to sign up or log in to ChatGPT, as well as device and browser identifiers and other Google Signals cookies that map activity to logged‑in Google profiles.

Sample network traces in the filing show event payloads where a hashed email appears under an “em” field, alongside cookies such as __Secure-3PSID that are associated with Google account identities.
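Hashed email fields of this kind are usually produced by normalizing the address (trim, lowercase) and taking a SHA-256 digest, the convention ad platforms document for matching uploaded identifiers to accounts. A minimal sketch of that normalization, assuming the common SHA-256 scheme:

```python
import hashlib

def hash_email(email: str) -> str:
    # Normalization commonly used by ad platforms before hashing:
    # strip surrounding whitespace, lowercase, then SHA-256 hex digest.
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

digest = hash_email(" Alice@Example.com ")  # same digest as "alice@example.com"
```

Because the hash is deterministic, it is a stable identifier: any party holding the same email address can compute the same digest and join records across sites, which is why a hashed email in an analytics payload is still personally identifying.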

Google Analytics is then accused of enriching this data with cross‑device behavior, demographic signals, and remarketing features, enabling OpenAI and Google to retarget users based on their ChatGPT activity and to fold those events into broader advertising and analytics products.

Substantively, the suit asserts that OpenAI “intentionally installed wiretaps” on ChatGPT.com by embedding Meta and Google tracking scripts, thereby aiding third‑party interception of users’ communications in transit.

Under ECPA, the plaintiffs argue that each ChatGPT interaction constitutes an “electronic communication,” and that copying those communications to Meta and Google via client‑side JavaScript and tracking pixels qualifies as an unlawful interception, disclosure, and use.

Under CIPA Sections 631 and 632, they characterize the Meta Pixel and Google Analytics tags, as well as the associated cookies and servers, as “machines, instruments, or contrivances” used to read or learn the contents of communications and to eavesdrop on confidential sessions without all-party consent.

The proposed nationwide class covers all U.S. residents whose personally identifiable information (PII) and ChatGPT communications were disclosed to third parties via the website, with a California subclass seeking statutory damages under CIPA of up to $5,000 per violation.

Plaintiffs are also pursuing injunctive relief to force OpenAI to remove or re‑architect its tracking integrations and to prohibit further disclosures of chatbot‑derived data to ad tech partners.

If certified and successful, the case could expose OpenAI to massive statutory damage exposure and effectively put browser‑based tracking of AI chats under the same legal microscope as health‑site pixels and session‑replay scripts that have recently drawn aggressive enforcement and litigation.

For security and privacy teams, the allegations cut to the heart of how AI front‑ends are instrumented: embedding generic marketing pixels and analytics tags into AI tools that handle highly sensitive, free‑form text may create unexpected surveillance channels that regulators and courts treat as wiretaps.

The complaint’s detailed network captures, from tab titles to cookie values, offer a blueprint for how plaintiffs’ experts are now inspecting AI properties for covert data flows to third‑party domains.

Organizations integrating commercial LLM front‑ends or building their own should expect similar scrutiny and urgently revisit their telemetry, cookie consent flows, and data‑sharing contracts to ensure that sensitive AI conversations are not silently leaking into ad ecosystems under legacy web‑tracking configurations.
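One starting point for that kind of review is a static scan of a page for script or image sources pointing at known tracker domains, the same covert data flows the complaint's network captures document. The sketch below uses only the Python standard library; the domain list is an illustrative assumption, not an authoritative blocklist, and a real audit would also inspect live network traffic, not just markup.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

# Illustrative tracker domains only; a real audit would use a maintained list.
TRACKER_DOMAINS = {
    "facebook.com", "connect.facebook.net",
    "google-analytics.com", "googletagmanager.com",
}

class TrackerAudit(HTMLParser):
    """Collects (tag, src) pairs whose host matches a known tracker domain."""
    def __init__(self):
        super().__init__()
        self.findings = []

    def handle_starttag(self, tag, attrs):
        src = dict(attrs).get("src")
        if not src:
            return
        host = urlparse(src).netloc.lower()
        if any(host == d or host.endswith("." + d) for d in TRACKER_DOMAINS):
            self.findings.append((tag, src))

page = ('<html><head>'
        '<script src="https://connect.facebook.net/en_US/fbevents.js"></script>'
        '</head><body><img src="/logo.png"></body></html>')
audit = TrackerAudit()
audit.feed(page)
# audit.findings now lists the third-party tracking script; /logo.png is ignored.
```

A scan like this only catches statically embedded tags; scripts injected at runtime by tag managers require inspecting the browser's actual outbound requests.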

Disclaimer: HackersRadar reports on cybersecurity threats and incidents for informational and awareness purposes only. We do not engage in hacking activities, data exfiltration, or the hosting or distribution of stolen or leaked information. All content is based on publicly available sources.
