Google thwarts hacker group's effort to use AI for 'mass exploitation event'
Key Points:
- Google's Threat Intelligence Group reported thwarting a hacker group's attempt to use AI models to plan large-scale exploitation of a zero-day vulnerability that bypassed two-factor authentication.
- The hackers used AI tools such as OpenClaw to identify and exploit software flaws, posing significant risks to companies, government agencies, and other organizations.
- Google emphasized it does not believe its own Gemini AI model was involved in the attack and did not disclose the responsible hacker group.
- The report highlights growing concern in the cybersecurity industry about criminals leveraging AI, prompting companies like Anthropic and OpenAI to limit some AI model releases to vetted cybersecurity teams.
- State-linked hacker groups from China and North Korea have shown notable interest in using AI for vulnerability discovery and cyberattacks, according to Google's findings.