Just like phishing for gullible humans, prompt injecting AIs is here to stay - theregister.com
theregister.comArchived Apr 21, 2026✓ Full text saved
Just like phishing for gullible humans, prompt injecting AIs is here to stay theregister.com
Full text archived locally
✦ AI Summary· Claude Sonnet
SECURITY
Just like phishing for gullible humans, prompt injecting AIs is here to stay
3
Aren't we all just prompting tokens of linguistic meaning and hoping the other person isn't bullshitting us?
Brandon Vigliarolo
Sun 19 Apr 2026 // 23:00 UTC
KETTLE It's a week of the year, which means there's been the discovery of yet another prompt injection attack that will force supposedly well-guarded AI bots to spill secrets by asking the right way.
When you think about it, humans and LLMs share a similar problem: They're both liable to hand over sensitive information when a crafty enough person asks the right - or wrong - way. We call it phishing when it targets humans, and prompt injection is pretty much the same thing for bots. It's basically embedding or hiding malicious instructions inside a document or file that you tell the AI to ingest and analyze; the AI, instead of treating them like part of the content, executes them.
There's a lot to discuss about prompt injection, and how it's basically an unsolvable problem of the AI age akin to phishing, and we cover it all on this week's episode of The Kettle, with host Brandon Vigliarolo joined this week by cybersecurity editor Jessica Lyons and senior reporter Tom Claburn.
You can listen to The Kettle here, as well as on Spotify and Apple Music. ®
No more fake tech news! Add The Register to your Preferred Sources in Google Search
More about
AI
Cybersecurity
Kettle
More like these
3 COMMENTS
TIP US OFF
Send us news