A single click mounted a covert, multistage attack against Copilot

A single click mounted a covert, multistage attack against Copilot

Ars Technica general

Key Points:

  • Microsoft patched a vulnerability in its Copilot AI assistant that allowed hackers to extract sensitive user data through a single click on a malicious URL.
  • The exploit, discovered by white-hat researchers from Varonis, enabled attackers to steal personal details like user name, location, and chat history events, even after the user closed the Copilot chat.
  • The attack bypassed enterprise endpoint security by using indirect prompt injections embedded in URLs, exploiting Copilot's inability to distinguish between user input and untrusted data.
  • Varonis found that Microsoft's guardrails only applied to initial requests, allowing repeated prompt injections to exfiltrate data; Microsoft has now fixed this flaw after private disclosure.
  • The vulnerability affected only Copilot Personal, with Microsoft 365