Google is Redrawing the Map of AI Security: Bounty Program Up to $30,000
- Next News
- Oct 7, 2025
- 1 min read
On Monday, Google launched a groundbreaking bounty program to incentivize experts and enthusiasts to uncover vulnerabilities in its artificial intelligence products, offering generous rewards of up to $30,000.

The program focuses on weaknesses and undesired behaviors that attackers could exploit—such as executing unauthorized commands through Google Home devices, or prompt injections that extract sensitive user data and transfer it to malicious parties.
Google’s program clearly defines what counts as an AI vulnerability (for example, exploiting large language models or generative AI systems to cause harm or breach security), giving priority to unauthorized actions that directly harm users or compromise their data—such as opening smart curtains or turning off lights through exploited loopholes.
Over the past two years, such discoveries have earned researchers more than $430,000, reflecting Google’s ongoing effort to engage AI security researchers in hardening its products against misuse.
The program uses a tiered reward system: $20,000 for exposing unauthorized harmful actions in flagship products such as Search, Gemini, Gmail, and Drive, with top-tier reports earning up to $30,000 depending on their quality and novelty.
Lower rewards apply to vulnerabilities in secondary Google products or to lesser offenses such as exfiltrating secret model parameters. Google also asks researchers to report content-related problems through internal channels, so models can be thoroughly diagnosed and safety training sustained.
This initiative comes as a direct response to the rising threats accompanying AI’s rapid progress, setting a new standard in user protection and supporting advanced security research in this critical field.