OpenAI Sued for Sharing ChatGPT Data with Meta and Google in Class-Action Privacy Suit
OpenAI Global LLC now faces a class-action complaint filed in the Southern District of California. The lawsuit alleges the company surreptitiously integrated Meta’s Facebook Pixel and Google Analytics into its ChatGPT web interface. Consequently, highly sensitive chatbot conversations became monetizable tracking data for online advertising ecosystems.
Filed by California resident Amargo Couture on behalf of all U.S. users who entered queries into ChatGPT.com, the suit claims OpenAI disclosed users’ chat topics, identifiers, and contact details to Meta and Google without consent, in violation of the federal Electronic Communications Privacy Act (ECPA), California’s Invasion of Privacy Act (CIPA), and state constitutional privacy rights.
According to the complaint, ChatGPT is routinely used to discuss “sensitive and personal topics” such as finances, health, and legal issues, with some estimates suggesting that a significant portion of company data pasted into ChatGPT is confidential.
Users allegedly had a reasonable expectation that these conversations would remain between themselves and OpenAI, not be piped to third‑party ad tech platforms.
The litigation lands amid a broader wave of privacy and copyright fights over generative AI and follows earlier suits that challenged OpenAI’s data‑collection and training practices.
OpenAI Hit With Privacy Class-Action Lawsuit
For Meta, the complaint centers on the Facebook Pixel code embedded in ChatGPT’s web pages, which allegedly triggers silent, real‑time HTTP requests to Facebook’s servers every time a user interacts with the site.
These requests are said to include both the content‑derived context (for example, the browser tab title “Super Bowl 2005 Winner” derived from a user query) and a set of cookies such as c_user, fr, and fbp that can be tied back to a specific Facebook account via the user’s Facebook ID.
Meta’s own documentation is cited to argue that this telemetry is then fed into its “Core Audiences,” “Custom Audiences,” and “Lookalike Audiences” systems for highly granular ad targeting across Facebook and Instagram.
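The beacon flow the complaint describes can be sketched in a simplified form. The `facebook.com/tr` endpoint, the `id`/`ev`/`dl` parameters, and the `c_user`, `fr`, and `fbp` cookie names appear in public Meta Pixel documentation and in the filing; everything else below (values, the helper name, the exact parameter set) is an illustrative assumption, not a reconstruction of the actual payload:

```python
from urllib.parse import urlencode

def build_pixel_beacon(pixel_id: str, page_url: str, tab_title: str,
                       browser_cookies: dict) -> dict:
    """Hypothetical sketch of a pixel-style tracking beacon.

    Assumption: real Meta Pixel payloads carry more fields and
    different encodings; this shows only the joinability problem.
    """
    params = {
        "id": pixel_id,     # the site's pixel ID
        "ev": "PageView",   # event name fired on each interaction
        "dl": page_url,     # document location, i.e. the ChatGPT URL
    }
    return {
        "url": "https://www.facebook.com/tr/?" + urlencode(params),
        # The browser attaches Facebook cookies to the cross-site
        # request (subject to SameSite rules), which is what lets the
        # receiving server tie the hit to a logged-in account.
        "cookies": {k: browser_cookies.get(k, "")
                    for k in ("c_user", "fr", "fbp")},
        # Content-derived context such as the tab title is visible to
        # any script running on the page.
        "tab_title": tab_title,
    }
```

The point of the sketch is that no single field is sensitive on its own; the combination of a content-derived value (`dl`, tab title) with an identity-bearing cookie (`c_user`) is what turns a page view into attributable tracking data.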
On the Google side, the complaint alleges that Google Analytics and associated Google Ads tags capture hashed email addresses used to sign up or log in to ChatGPT, as well as device and browser identifiers and other Google Signals cookies that map activity to logged‑in Google profiles.
Sample network traces in the filing show event payloads where a hashed email appears under an “em” field, alongside cookies such as __Secure-3PSID that are associated with Google account identities.
Google Analytics is then accused of enriching this data with cross‑device behavior, demographic signals, and remarketing features, enabling OpenAI and Google to retarget users based on their ChatGPT activity and to fold those events into broader advertising and analytics products.
Substantively, the suit asserts that OpenAI “intentionally installed wiretaps” on ChatGPT.com by embedding Meta and Google tracking scripts, thereby aiding third‑party interception of users’ communications in transit.
Under ECPA, the plaintiffs argue that each ChatGPT interaction constitutes an “electronic communication,” and that copying those communications to Meta and Google via client‑side JavaScript and tracking pixels qualifies as an unlawful interception, disclosure, and use.
Under CIPA Sections 631 and 632, they characterize the Meta Pixel and Google Analytics tags, along with the associated cookies and servers, as “machines, instruments, or contrivances” used to read or learn the contents of communications and to eavesdrop on confidential sessions without all-party consent.
The proposed nationwide class covers all U.S. residents whose personally identifiable information (PII) and ChatGPT communications were disclosed to third parties via the website, with a California subclass seeking statutory damages under CIPA of up to $5,000 per violation.
Plaintiffs are also pursuing injunctive relief to force OpenAI to remove or re‑architect its tracking integrations and to prohibit further disclosures of chatbot‑derived data to ad tech partners.
If certified and successful, the case could expose OpenAI to massive statutory damage exposure and effectively put browser‑based tracking of AI chats under the same legal microscope as health‑site pixels and session‑replay scripts that have recently drawn aggressive enforcement and litigation.
For security and privacy teams, the allegations cut to the heart of how AI front‑ends are instrumented: embedding generic marketing pixels and analytics tags into AI tools that handle highly sensitive, free‑form text may create unexpected surveillance channels that regulators and courts treat as wiretaps.
The complaint’s detailed network captures, from tab titles to cookie values, offer a blueprint for how plaintiffs’ experts are now inspecting AI properties for covert data flows to third‑party domains.
Organizations integrating commercial LLM front‑ends or building their own should expect similar scrutiny and urgently revisit their telemetry, cookie consent flows, and data‑sharing contracts to ensure that sensitive AI conversations are not silently leaking into ad ecosystems under legacy web‑tracking configurations.
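One low-effort first step in such a telemetry review is a static scan of rendered pages for resources loaded from known ad-tech hosts. The sketch below is illustrative only: the domain watchlist, class name, and helper are assumptions, and a static scan will miss trackers injected dynamically at runtime, so it complements rather than replaces network-level inspection:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

# Illustrative watchlist; a real review would use a maintained
# blocklist and include dynamically injected scripts.
ADTECH_DOMAINS = {
    "connect.facebook.net",
    "www.facebook.com",
    "www.googletagmanager.com",
    "www.google-analytics.com",
}

class TrackerScanner(HTMLParser):
    """Flags <script>, <img>, and <iframe> elements whose src points
    at a known ad-tech host in a page's static HTML."""
    def __init__(self):
        super().__init__()
        self.findings = []

    def handle_starttag(self, tag, attrs):
        if tag not in ("script", "img", "iframe"):
            return
        src = dict(attrs).get("src", "")
        if urlparse(src).netloc in ADTECH_DOMAINS:
            self.findings.append((tag, src))

def scan_page(html: str) -> list:
    scanner = TrackerScanner()
    scanner.feed(html)
    return scanner.findings
```

Running the scanner over the HTML of a sensitive page (a chat UI, a login flow) surfaces exactly the kind of third-party loads the complaint's network captures documented.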
Disclaimer: HackersRadar reports on cybersecurity threats and incidents for informational and awareness purposes only. We do not engage in hacking activities, data exfiltration, or the hosting or distribution of stolen or leaked information. All content is based on publicly available sources.