CyberIntel ⬡ News
★ Saved ◆ Cyber Reads
← Back ◇ Industry News & Leadership Apr 27, 2026

Malicious AI Prompt Injection Attacks Increasing, but Sophistication Still Low: Google

Security Week Archived Apr 27, 2026 ✓ Full text saved

The tech giant found that many indirect prompt injection attempts are harmless, but some malicious exploits have also been identified. The post Malicious AI Prompt Injection Attacks Increasing, but Sophistication Still Low: Google appeared first on SecurityWeek .

Full text archived locally
✦ AI Summary · Claude Sonnet


    Google has analyzed AI indirect prompt injection attempts involving sites on the public web and noticed an increase in malicious attacks over the past months, but the tech giant’s researchers say their sophistication is relatively low. Direct prompt injection is a ‘jailbreak’ where a user interacts with the AI to bypass its rules, whereas indirect prompt injection is a ‘hidden trap’ where the AI is tricked by malicious instructions found in external data. Cybersecurity researchers have discovered many indirect prompt injection methods in recent years, using specially crafted prompts planted on websites, in emails, and developer resources to trick Gemini, Copilot, ChatGPT, and other gen-AI tools into bypassing security and facilitating data theft. While many theoretical attack methods exist, threat intelligence experts at Google recently set out to determine the extent to which these AI vulnerabilities are being exploited in the wild. Specifically, their research focused on indirect prompt injection attempts set up on websites on the public internet. They scanned the website snapshots saved by Common Crawl for known prompt injection patterns and used Gemini and human reviews to weed out false positives. An analysis of the identified prompt injections found harmless pranks, attempts to deter AI agents, search engine optimization, and helpful guidance, as well as some malicious attacks. Prank prompt injections can, for instance, instruct visiting AI assistants to change their behavior (eg, act like a baby bird and tweet like a bird).  Some website owners place helpful instructions for AI tasked with summarizing a site, but others add prompts designed to prevent assistants from crawling the website, including by telling the AI that the content is dangerous and sensitive.  Google researchers have also come across websites whose administrators attempt to boost SEO by instructing AI assistants to claim their company is the best. The most important, however, from a security standpoint are the malicious prompt injection attempts. The researchers uncovered two types of such attacks: exfiltration and destruction. Some websites contained prompts instructing AI to collect data, including IPs and credentials, and send it to an attacker-specified email address.  “However, for this class of attacks, sophistication seemed much lower,” the Google researchers said, adding, “We did not observe significant amounts of advanced attacks (eg, using known exfiltration prompts published by security researchers in 2025). This seems to indicate that attackers have yet not productionized this research at scale.” In the destruction category, some prompts attempted to trick AI into deleting all files on the user’s machine, but the researchers noted that such attacks are unlikely to succeed.  While they did not see any particularly sophisticated attacks, the Google experts pointed out that they did see a 32% increase in malicious prompt injection attempts between November 2025 and February 2026. They warned that both the scale and sophistication of prompt injection attacks are expected to increase in the near future. “Our findings indicate that, while past attempts at IPI attacks on the web have been low in sophistication, their upward trend suggests that the threat is maturing and will soon grow in both scale and complexity,” the researchers concluded.  Related: Why Cybersecurity Must Rethink Defense in the Age of Autonomous Agents Related: Trump Administration Vows Crackdown on Chinese Companies ‘Exploiting’ AI Models Made in US WRITTEN BY Eduard Kovacs Eduard Kovacs (@EduardKovacs) is senior managing editor at SecurityWeek. He worked as a high school IT teacher before starting a career in journalism in 2011. Eduard holds a bachelor’s degree in industrial informatics and a master’s degree in computer techniques applied in electrical engineering. More from Eduard Kovacs Locked Shields 2026: 41 Nations Strengthen Cyber Resilience in World’s Biggest Exercise Vulnerabilities Patched in CrowdStrike, Tenable Products Chinese Cybersecurity Firm’s AI Hacking Claims Draw Comparisons to Claude Mythos AI Can Autonomously Hack Cloud Systems With Minimal Oversight: Researchers  After Bluesky, Mastodon Targeted in DDoS Attack Claude Mythos Finds 271 Firefox Vulnerabilities Google Antigravity in Crosshairs of Security Researchers, Cybercriminals Third US Security Expert Admits Helping Ransomware Gang Latest News Incomplete Windows Patch Opens Door to Zero-Click Attacks OpenSSH Flaw Allowing Full Root Shell Access Lurked for 15 Years Energy and Water Management Firm Itron Hacked UNC6692 Uses Email Bombing, Social Engineering to Deploy ‘Snow’ Malware Easily Exploitable ‘Pack2TheRoot’ Linux Vulnerability Leads to Root Access US Launches Sweeping Crackdown on Southeast Asia Cyberscams and Sanctions Cambodian Senator Firefox Vulnerability Allows Tor User Fingerprinting China-Linked APT GopherWhisper Abuses Legitimate Services in Government Attacks Trending Webinar: A Step-By-Step Approach To AI Governance April 28, 2026 With "Shadow AI" usage becoming prevalent in organizations, learn how to balance the need for rapid experimentation with the rigorous controls required for enterprise-grade deployment. Register Virtual Event: Threat Detection And Incident Response Summit May 20, 2026 Delve into big-picture strategies to reduce attack surfaces, improve patch management, conduct post-incident forensics, and tools and tricks needed in a modern organization. Register People on the Move Neill Feather has been named Chief Executive Officer at Point Wild. Oasis Security has appointed Michael DeCesare as President. Sterling Wilson has joined IGEL as Global Field CTO, Business Continuity and Disaster Recovery. More People On The Move Expert Insights Why Cybersecurity Must Rethink Defense In The Age Of Autonomous Agents From autonomous code generation to decision-making systems that initiate actions without human intervention, the industry is entering a new phase. (Torsten George) Government Can’t Win The Cyber War Without The Private Sector Securing national resilience now depends on faster, deeper partnerships with the private sector. (Steve Durbin) The Hidden ROI Of Visibility: Better Decisions, Better Behavior, Better Security Beyond monitoring and compliance, visibility acts as a powerful deterrent, shaping user behavior, improving collaboration, and enabling more accurate, data-driven security decisions. (Joshua Goldfarb) The New Rules Of Engagement: Matching Agentic Attack Speed The cybersecurity response to AI-enabled nation-state threats cannot be incremental. It must be architectural. (Nadir Izrael) The Next Cybersecurity Crisis Isn’t Breaches—It’s Data You Can’t Trust Data integrity shouldn’t be seen only through the prism of a technical concern but also as a leadership issue. (Steve Durbin) Flipboard Reddit Whatsapp Email
    💬 Team Notes
    Article Info
    Source
    Security Week
    Category
    ◇ Industry News & Leadership
    Published
    Apr 27, 2026
    Archived
    Apr 27, 2026
    Full Text
    ✓ Saved locally
    Open Original ↗