CyberIntel ⬡ News
★ Saved ◆ Cyber Reads
← Back ◇ Industry News & Leadership May 12, 2026

AI-Built Zero-Day Nearly Powered Mass Attack

Data Breach Today Archived May 12, 2026 ✓ Full text saved

Google Says Criminals Used AI to Discover and Code Exploit

A cybercriminal group came close to launching a mass attack earlier this year, armed with a software exploit that an AI model had built from scratch, Google researchers said. Google said it worked with the affected vendor to patch the flaw before an attack could be launched.



Rashmi Ramesh (rashmiramesh_) • May 12, 2026

A cybercriminal group came close to launching a mass attack earlier this year, armed with a software exploit that an artificial intelligence model had built from scratch, Google researchers said.

The disclosure by Google's Threat Intelligence Group stands apart from earlier reports about AI-generated vulnerabilities: this was a criminal operation deploying a working zero-day whose exploit code was itself AI-generated. Google said it worked with the affected vendor to patch the flaw before the attack could be launched, though it did not name the vendor or the tool.

The flaw itself was a two-factor authentication bypass, exploited by a Python script targeting a popular open-source web administration tool. It was not the kind of vulnerability that conventional security scanners are built to catch: it stemmed not from common implementation errors such as memory corruption or improper input handling, but from a high-level semantic logic flaw in which the developer had hardcoded a trust assumption. Traditional tools scan for crashes and known error patterns. AI models can read what the developer intended the code to do and identify contradictions between that intent and how the code actually behaves.

Researchers have previously demonstrated that AI can find and exploit vulnerabilities in controlled lab settings.
This includes a 2024 University of Illinois study showing that GPT-4 could exploit known vulnerabilities with an 87% success rate when given descriptions of the flaws, as well as a separate paper from the same UIUC group demonstrating that teams of AI agents could exploit real-world zero-day vulnerabilities in a controlled research environment.

What distinguished the Google researchers' findings was the apparent criminal intent behind the activity and forensic evidence suggesting AI was used to carry out the technical work of discovering and coding an exploit as part of a campaign designed for mass deployment. The exploit script contained an abundance of educational comments and a fabricated severity score, and used a structured, textbook-style Python format highly characteristic of the training data used to build large language models. Google said it has high confidence an AI model was used but does not believe its own Gemini was involved. GTIG said the actor "likely leveraged an AI model to support the discovery and weaponization," meaning the AI appears to have done the technical heavy lifting of finding and coding the exploit, while humans planned and directed the broader campaign.

"AI has changed the economics of exploit development," said Nicole Carignan, senior vice president of security and AI strategy at Darktrace. "It industrializes what was previously a high-skill, time-intensive process, turning it into something that is more repeatable and scalable, that can be done faster and by a broader range of actors."

The zero-day was the most concrete finding in a report that also documents broader, more systematic use of AI across the offensive threat landscape. State-sponsored groups from China and North Korea feature prominently.
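To make the flaw class from the zero-day discussion concrete: a "hardcoded trust assumption" is a semantic contradiction between what the developer intended (always require a second factor) and what the code does. The following is a toy, hypothetical sketch of that pattern; every name here is invented and unrelated to the unnamed tool in Google's report:

```python
def check_password(user: str, password: str) -> bool:
    # Stand-in for a real credential check (hypothetical).
    return (user, password) == ("admin", "hunter2")

def check_totp(user: str, otp_code: str) -> bool:
    # Stand-in for a real TOTP verification (hypothetical).
    return otp_code == "123456"

def verify_login(user: str, password: str, otp_code: str, remote_addr: str) -> bool:
    if not check_password(user, password):
        return False
    # The logic flaw: the developer hardcoded a trust assumption that
    # requests from the "internal" 10.x network are safe, so the second
    # factor is skipped entirely. No memory corruption, no bad input
    # handling -- just intent contradicting behavior.
    if remote_addr.startswith("10."):
        return True  # 2FA silently bypassed
    return check_totp(user, otp_code)
```

A pattern-matching scanner sees nothing anomalous in code like this; spotting the bug requires understanding that the intent was to require a second factor on every login.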
North Korea's APT45 sent thousands of automated, repetitive prompts to AI models to systematically analyze known software flaws and validate working exploits, building what Google described as an arsenal of exploit capabilities that would be impractical to assemble without AI.

A China-linked group tracked as UNC2814 attempted to manipulate Gemini by instructing it to assume the role of a network security expert specializing in embedded devices, a technique designed to coax the model into providing vulnerability research it would otherwise decline to assist with. In a more sophisticated operation, threat actors experimented with a GitHub repository called "wooyun-legacy," designed as an AI code-skill plugin that integrates a distilled knowledge base of more than 85,000 real-world vulnerability cases collected by the Chinese bug bounty platform WooYun between 2010 and 2016. By feeding the model this historical vulnerability data, they effectively trained it, within a single session, to approach code analysis like an experienced security researcher and to prioritize the kinds of logic flaws a general-purpose model might otherwise overlook.

Russia-linked groups used AI differently, mainly for hiding malware. Google identified two malware families, CanFail and LongStream, deployed against Ukrainian targets. Both used AI-generated filler code to pad their source files with inert, benign-looking routines. In LongStream, researchers found 32 separate instances of the code checking the system's daylight saving status, repetitive queries with no functional purpose, inserted to make the malicious code harder to identify amid the noise.

The report also expanded on the capabilities of PromptSpy, an Android backdoor first identified by cybersecurity firm ESET that uses Google's Gemini API to control infected devices without human direction. Google's analysis revealed capabilities beyond what had previously been reported.
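The LongStream padding trick, identical no-op calls repeated as noise, suggests a simple defensive counter-heuristic: count how often the same call expression appears as a bare statement whose result is discarded. This is a toy sketch for Python source with an assumed threshold, not a description of any vendor's detection logic:

```python
import ast
from collections import Counter

def repeated_noop_calls(source: str, threshold: int = 5) -> dict:
    """Flag call expressions that recur many times as bare statements
    (result unused) -- a crude proxy for inert AI-generated filler code."""
    tree = ast.parse(source)
    counts = Counter()
    for node in ast.walk(tree):
        # A bare `Expr` wrapping a `Call` is a call whose return value
        # is thrown away -- exactly the shape of inert padding.
        if isinstance(node, ast.Expr) and isinstance(node.value, ast.Call):
            counts[ast.unparse(node.value)] += 1
    return {call: n for call, n in counts.items() if n >= threshold}

# Example input mimicking the pattern described above: 32 identical,
# functionless daylight-saving-style checks scattered through a file.
padded = "import time\n" + "time.localtime()\n" * 32
```

Real triage would be far more involved, and malware source is rarely this convenient, but the sketch illustrates why 32 identical purposeless queries are a statistical anomaly rather than camouflage once someone counts them.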
The malware contains an autonomous module that maps the visible layout of a device's screen, sends that layout to Gemini and receives back precise coordinates and gesture instructions, such as clicks and swipes, that it then executes to navigate the phone on the attacker's behalf. It can also capture biometric login data, such as fingerprint patterns or PIN sequences, to regain access to a locked device. When a victim tries to uninstall it, PromptSpy locates the uninstall button on screen and places an invisible layer over it, intercepting the victim's tap so the button appears not to work. The malware's command infrastructure, including its Gemini API keys and relay server, can be updated remotely without redeploying the malware itself. Google said it disabled the assets associated with PromptSpy, adding that no apps containing it are on the Google Play Store.

Threat actors are bypassing the usage controls and safety guardrails that AI providers put in place by running their operations through a network of proxy relay services, pooled accounts and automated registration pipelines that cycle through free trials before accounts can be flagged or banned. A March 2026 study from the CISPA Helmholtz Center for Information Security identified 17 such shadow API services that claim to offer access to official AI model services without regional restrictions. The researchers found that models accessed through these proxies performed significantly worse: accuracy on a standard medical knowledge benchmark dropped from roughly 84% with the official API to approximately 37% across the shadow services. Every prompt and response passing through these servers was also potentially visible to their operators, raising the risk that sensitive data traversing these channels could be captured and misused.

Carignan said this access infrastructure gives attackers a structural advantage that defenders cannot yet match.
"Bad actors have built out an infrastructure that enables them to gain persistent, free access to premium commercial AI models," she said. "That means they can spend time building sophisticated capabilities in the best AI models and there is no limit to their usage. Compared with the more cautious approach taken by defenders, that gives a clear advantage to the attackers."
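A closing aside on the forensic markers mentioned earlier, the dense educational comments and the fabricated severity score embedded in the exploit script: those signals can be approximated with a crude stylistic heuristic. The sketch below is an assumption-laden illustration of that idea, not GTIG's attribution methodology:

```python
import re

def ai_style_markers(source: str) -> dict:
    """Crude stylistic signals for possibly AI-generated Python scripts:
    comment density plus CVSS-like severity strings pasted into the file.
    High values are suggestive for triage, never conclusive on their own."""
    lines = [ln for ln in source.splitlines() if ln.strip()]
    comments = [ln for ln in lines if ln.strip().startswith("#")]
    # Match either a CVSS v3 vector string or a "CVSS: 9.8"-style score.
    severity = re.findall(r"CVSS:3\.\d/[A-Z:/.]+|CVSS[:\s]*\d+\.\d+", source)
    return {
        "comment_ratio": round(len(comments) / max(len(lines), 1), 2),
        "severity_strings": severity,
    }
```

Human-written exploit code tends to be terse; a script that narrates every step and quotes its own severity rating reads like training data, which is exactly the tell Google's researchers described.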
    💬 Team Notes
    Article Info
    Source
    Data Breach Today
    Category
    ◇ Industry News & Leadership
    Published
    May 12, 2026
    Archived
    May 12, 2026
    Full Text
    ✓ Saved locally
    Open Original ↗