A new report from the Google Threat Intelligence Group (GTIG) reveals that sophisticated hacker groups have begun using AI tools to create and deploy zero-day exploits. The revelation confirms what many technology analysts have been warning about for some time: advanced AI tools will inevitably allow bad actors to discover vulnerabilities they would not have found otherwise.
GTIG says it identified a “threat actor using a zero-day exploit that we believe was developed with AI.” The report does not identify the threat actor, but it notes that the exploit was designed to be deployed as part of a “mass exploitation event.” The exploit targeted a vulnerability in a Python script that made it easier to bypass two-factor authentication. Fortunately, the vulnerability was patched before the exploit could be deployed en masse.
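GTIG did not publish the vulnerable code, so the specifics of the flaw remain unknown. As a purely hypothetical illustration of this class of bug, here is a minimal Python sketch of a two-factor check that “fails open”: an exception during verification grants access instead of denying it. None of the names or logic below come from the report.

```python
# Hypothetical illustration only -- not the code from the GTIG report.
# A classic 2FA bypass pattern: the verification routine "fails open,"
# so any error during the check grants access instead of denying it.
import hmac

def verify_totp_buggy(submitted_code: str, expected_code: str) -> bool:
    try:
        # Constant-time comparison of the one-time codes.
        return hmac.compare_digest(submitted_code, expected_code)
    except Exception:
        # BUG: on any error (e.g. a non-string value sneaking in),
        # the function returns True -- an attacker can skip 2FA entirely.
        return True

def verify_totp_safe(submitted_code: str, expected_code: str) -> bool:
    try:
        return hmac.compare_digest(submitted_code, expected_code)
    except Exception:
        # Fail closed: any error denies access.
        return False
```

The fix is a one-line change, which is part of why this class of bug is attractive to automated discovery: the vulnerable and safe versions look nearly identical.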
Another reason this development is concerning is that AI, in addition to discovering exploits, is helping hackers produce malware and test software vulnerabilities far more quickly. Cyberattacks that previously required months of careful development can now be carried out in a fraction of that time. Hackers have also started using advanced AI to create credible phishing scams, including a creepy new Gmail hack in which a super-realistic AI impersonates Google support representatives to trick unsuspecting victims into handing over sensitive credentials.
How Google determined the malware was created using AI
Google’s security team determined that AI was used to create the exploit through a thorough analysis of the code. Specifically, researchers discovered strings of the kind that appear frequently in LLM training data. The codebase also contained an out-of-place CVSS severity score, suggesting the model that produced it was trained on cybersecurity texts. This is broadly similar to the way AI software can cite non-existent case law when writing a legal brief; indeed, some law firms have gotten in trouble for submitting briefs containing AI hallucinations about made-up cases.
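Google has not published its detection methodology, but one of the artifacts it describes, a CVSS score left embedded in the code, is easy to flag mechanically. As a hypothetical sketch (the regex, function name, and sample are ours, not Google’s), a scanner might look for CVSS v3 vector strings, which belong in security advisories rather than in working exploit code:

```python
# Illustrative sketch only: flag LLM-style artifacts such as embedded
# CVSS vector strings, which have no practical role in exploit code but
# are common in the security advisories LLMs are trained on.
import re

# Matches CVSS v3.0/3.1 vector strings, e.g. "CVSS:3.1/AV:N/AC:L/..."
CVSS_VECTOR = re.compile(r"CVSS:3\.[01]/(?:[A-Z]{1,3}:[A-Z]/?)+")

def find_llm_artifacts(source: str) -> list[str]:
    """Return any CVSS vector strings found in the given source code."""
    return CVSS_VECTOR.findall(source)

sample = '''
# Severity: 9.8 (CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H)
def exploit(target): ...
'''
print(find_llm_artifacts(sample))
# ['CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H']
```

A single heuristic like this proves nothing on its own; the report describes attribution as resting on the overall structure and content of the exploits, not any one telltale string.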
Although hallucinations are one of the uncomfortable truths about using Google Gemini, the platform was likely not used to generate the malicious code. Separately, the report notes that malicious actors typically spread their activity across multiple accounts and different AI models to avoid detection and keep suspicious behavior from setting off alarm bells. “While we do not believe that Gemini was used,” the report reads in part, “based on the structure and content of these exploits, we are confident that the actor likely exploited an AI model to support the discovery and weaponization of this vulnerability.”
While hackers using AI is certainly concerning, it’s worth noting that many businesses and security firms already use AI to proactively scan for security vulnerabilities before releasing new software. Ideally, this allows companies to harden their software before malicious actors have the opportunity to exploit it in the wild. For example, Mozilla said a few days ago that it leveraged AI tools to help it discover and fix 423 security bugs in just one month.