Google stopped a zero-day hack that it says was developed with AI
Key Points:
- Google Threat Intelligence Group (GTIG) has identified and stopped a zero-day exploit created with AI assistance, aimed at bypassing two-factor authentication on an unnamed open-source web-based system administration tool.
- The exploit’s Python script showed signs of AI involvement, including a fabricated CVSS score and structured formatting consistent with large language model training data.
- This marks the first time Google has found evidence of AI being used in such an attack, although it does not believe its own Gemini AI was involved.
- Google warns that cybercriminals are increasingly leveraging AI to discover and exploit security vulnerabilities, and that they are also targeting AI system components such as autonomous skills and third-party data connectors.
- The report also highlights tactics such as "persona-driven jailbreaking" to manipulate AI into identifying vulnerabilities and the use of AI-generated payload refinement tools to enhance exploit reliability before deployment.