Rapid advancements in artificial intelligence (AI) are unlocking new possibilities for the way we work and accelerating innovation in science, technology, and beyond. In cybersecurity, AI is poised to transform digital defense, empowering defenders and enhancing our collective security. Large language models (LLMs) open new possibilities for defenders, from sifting through complex telemetry to secure coding, vulnerability discovery, and streamlining operations. However, some of these same AI capabilities are also available to attackers, leading to understandable anxieties about the potential for AI to be misused for malicious purposes.
Much of the current discourse around cyber threat actors' misuse of AI is confined to theoretical research. While these studies demonstrate the potential for malicious exploitation of AI, they don't necessarily reflect the reality of how AI is currently being used by threat actors in the wild. To bridge this gap, we are sharing a comprehensive analysis of how threat actors interacted with Google's AI-powered assistant, Gemini. Our analysis was grounded in the expertise of Google's Threat Intelligence Group (GTIG), which combines decades of experience tracking threat actors on the front lines and protecting Google, our users, and our customers from government-backed attackers, targeted 0-day exploits, coordinated information operations (IO), and serious cyber crime networks.
We believe the private sector, governments, educational institutions, and other stakeholders must work together to maximize AI's benefits while also reducing the risks of abuse. At Google, we are committed to developing responsible AI guided by our principles, and we share