Large language models (LLMs) open new possibilities for defenders, from sifting through complex telemetry to secure coding, vulnerability discovery, and streamlining operations. However, some of these same AI capabilities are also available to attackers, raising understandable concerns about the potential for AI to be misused for malicious purposes.
Much of the current discourse around cyber threat actors’ misuse of AI is confined to theoretical research. While these studies demonstrate the potential for malicious exploitation of AI, they don’t necessarily reflect how AI is actually being used by threat actors in the wild. To bridge this gap, we are sharing a comprehensive analysis of how threat actors interacted with Google’s AI-powered assistant, Gemini. Our analysis was grounded in the expertise of Google’s Threat Intelligence Group (GTIG), which combines decades of experience tracking threat actors on the front lines and protecting Google, our users, and our customers from government-backed attackers, targeted 0-day exploits, coordinated information operations (IO), and serious cybercrime networks.