OpenAI launches public Safety Bug Bounty program to reward researchers who find AI abuse and agentic risks

OpenAI has launched a public Safety Bug Bounty program that pays researchers to identify AI-specific abuse and safety risks that fall outside conventional security vulnerabilities, covering areas including third-party prompt injection, agentic product misbehavior, exposure of proprietary model information, and account integrity bypasses. The program complements OpenAI's existing Security Bug Bounty: submissions are triaged jointly by both teams, with reports rerouted depending on scope. General jailbreaks are out of scope, though OpenAI says it periodically runs separate private campaigns targeting specific harm types, such as biorisk content in ChatGPT Agent and GPT-5.

Paul Drecksler is the founder and editor of Shopifreaks E-commerce Newsletter, covering the most important stories in e-commerce.