
Hackers Use AI for Exploit Development, Attack Automation

Dark Reading Archived May 11, 2026 ✓ Full text saved

Cyber adversaries have long used AI, but now attackers are using large language models to develop exploits and orchestrate complex attacks.



Alexander Culafi, Senior News Writer, Dark Reading
May 11, 2026 · 4 Min Read

(Image: NicoElNino via Alamy Stock Photo)

Threat actors are abusing AI tools in increasingly sophisticated ways, including exploit development and attack orchestration. Google today published new research tracking how adversaries leverage AI in their cyber operations.

Since large language model (LLM) tools became widely available, threat actors have leveraged the technology in a wide range of ways, such as crafting phishing lures, coding malware, and conducting reconnaissance. They are also using AI, as Google detailed, for vulnerability research and exploit development.

This research arrives as defenders contemplate how Anthropic's Claude Mythos model (and by extension Project Glasswing) will reshape the security ecosystem for years to come, as Anthropic claims Mythos is capable of finding critical zero-day vulnerabilities using natural language instruction. While this report doesn't claim threat actors are using anything like Mythos, Google's Threat Intelligence Group (GTIG) covers some of the cutting-edge ways attackers are using AI today.

No Mythos Needed: Exploit Developed With AI

For example, GTIG said it identified a threat actor using a zero-day exploit the company believes was developed with AI, possibly the first of its kind. According to the report, the vulnerability is "implemented in a Python script that enables the user to bypass two-factor authentication (2FA) on a popular open-source, web-based system administration tool." Exploiting the vulnerability requires valid user credentials.
Although the threat actor was (or possibly still is) planning to use the vulnerability on a massive scale, GTIG disclosed the bug to the appropriate vendor in the hopes of disrupting potential threat activity.

"Although we do not believe Gemini was used, based on the structure and content of these exploits, we have high confidence that the actor likely leveraged an AI model to support the discovery and weaponization of this vulnerability," the report read. "For example, the script contains an abundance of educational docstrings, including a hallucinated CVSS score, and uses a structured, textbook Pythonic format highly characteristic of LLM training data (e.g., detailed help menus and the clean _C ANSI color class)."

Threat actors associated with China and North Korea have shown particular interest in using LLMs for vulnerability research. For example, GTIG has observed suspected Chinese actor UNC2814 prompting Gemini to take on the role of a network security researcher conducting vulnerability research into embedded device firmware, such as TP-Link's. The actor tells the AI they are "auditing it for pre-authentication remote code execution (RCE) vulnerabilities."

North Korean actor Silent Chollima, also known as APT45, has been observed "sending thousands of repetitive prompts that recursively analyze different CVEs and validate PoC exploits." This, Google said, yields more robust exploit capabilities than the model would provide otherwise.

Threat actors have similarly trained on a specialized vulnerability repository known as "wooyun-legacy," which contains more than 85,000 real-world vulnerability cases collected by the Chinese bug bounty platform WooYun between 2010 and 2016. They are also experimenting with agentic tools like OpenClaw and OneClaw to assist in vulnerability research.
AI-Powered Attack Orchestration

But one of the most striking use cases detailed in the report involved the use of AI to orchestrate attacks, as seen in a malware family known as "PromptSpy." This is an Android backdoor first detailed by ESET, which abuses Gemini by prompting it to ensure the malicious app remains in the "recent apps" list.

GTIG's analysis found that the backdoor used AI for other purposes as well, primarily "centered around navigating the Android user interface and autonomously interpreting real-time user activity for follow-on actions." For example, it can capture biometric data to replay authentication gestures and regain access to a compromised device.

Moreover, threat actors are using agentic workflows to "operationalize autonomous frameworks to execute multi-stage security tasks." A China-nexus actor deployed agentic tools in an attack against a Japanese technology firm and an East Asian cybersecurity platform, according to the report. Agentic tools like Hextrike and Strix were used to maintain persistence across the attack surface and to both automate and validate vulnerability findings.

"This combination of autonomous reconnaissance and automated verification suggests a transition toward AI-driven frameworks that can scale discovery activities with minimal human oversight," GTIG said.

Though still limited in scale and frequency, it is noteworthy to see threat actors move from heavily human-driven operations to campaigns where the AI takes more control. This mirrors the progression of AI on the defender side, where some organizations are moving away from human-in-the-loop thinking and toward human-on-the-loop, in which agents are the primary AI orchestrators making moment-to-moment decisions and humans intervene only when necessary.