OpenAI Launches AI Safety Bug Bounty for AI Vulnerabilities
OpenAI has launched a public Safety Bug Bounty program specifically designed to identify AI abuse and safety risks across its product offerings. Hosted on Bugcrowd, the new initiative marks a...
OpenAI has launched a public Safety Bug Bounty program specifically designed to identify AI abuse and safety risks across its product offerings.
Hosted on Bugcrowd, the new initiative marks a significant step in the company’s efforts to address vulnerabilities that fall outside the scope of traditional security flaws but still pose real-world harm potential.
The Safety Bug Bounty program is designed to complement OpenAI’s existing Security Bug Bounty program by accepting submissions that carry meaningful abuse and safety risks even when those issues don’t qualify as conventional security vulnerabilities.
Submissions will be triaged jointly by OpenAI’s Safety and Security Bug Bounty teams and may be rerouted between the two programs depending on scope and ownership.
AI-Specific Risk Categories in Focus
The program targets several distinct categories of AI-specific safety scenarios:
Agentic Risks Including MCP — This covers third-party prompt injection and data exfiltration scenarios where attacker-controlled text can reliably hijack a victim’s AI agent, including Browser, ChatGPT Agent, and similar agentic products, to perform harmful actions or leak sensitive user data.
To qualify, the behavior must be reproducible at least 50% of the time. Reports involving agentic products performing disallowed or potentially harmful actions at scale are also in scope.
OpenAI Proprietary Information — Researchers can report model generations that inadvertently expose reasoning-related proprietary information, as well as vulnerabilities that leak other confidential OpenAI data.
Account and Platform Integrity — This category targets weaknesses in account and platform integrity signals, including bypassing anti-automation controls, manipulating account trust signals, and evading account restrictions, suspensions, or bans.
OpenAI has been explicit about what is out of scope: generic jailbreaks that result in rude language or surface publicly available information will not be considered.
General content-policy bypasses without demonstrable safety or abuse impact are also excluded. However, OpenAI periodically runs private bug bounty campaigns targeting specific harm types, such as Biorisk content issues in ChatGPT Agent and GPT-5, and invites researchers to apply when those programs become available.
For vulnerabilities enabling unauthorized access to features, data, or functionality beyond permitted permissions, researchers are directed to the existing Security Bug Bounty program instead.
The launch signals a growing recognition that AI systems introduce an entirely new attack surface, one that traditional security frameworks weren’t built to address.
By incentivizing safety-focused research alongside conventional vulnerability disclosure, OpenAI is effectively establishing a structured framework for AI-specific threat modeling.
Researchers interested in participating can apply directly through OpenAI’s Safety Bug Bounty page on Bugcrowd.
Disclaimer: HackersRadar reports on cybersecurity threats and incidents for informational and awareness purposes only. We do not engage in hacking activities, data exfiltration, or the hosting or distribution of stolen or leaked information. All content is based on publicly available sources.



No Comment! Be the first one.