Auditing the Gatekeepers: Fuzzing "AI Judges" to Bypass Security Controls
Palo Alto Networks Unit 42 · Archived Mar 16, 2026
Executive Summary
As organizations scale AI operations, they increasingly deploy AI judges — large language models (LLMs) acting as automated security gatekeepers to enforce safety policies and evaluate output quality. Our research investigates a critical security issue in these systems: They can be manipulated into authorizing policy violations through stealthy input sequences, a type of prompt injection.
To conduct this investigation, we designed AdvJudge-Zero, an automated fuzzer built for internal red-team-style assessments. Fuzzers are tools that identify software vulnerabilities by supplying unexpected input, and we apply the same approach to AI judges. AdvJudge-Zero identifies specific trigger sequences that exploit a model's decision-making logic to bypass security controls.
Unlike previous adversarial attacks that produce detectable gibberish, our research proves that effective attacks can be entirely stealthy, using benign formatting symbols to reverse a block decision to allow.
Examining how this tool works makes it easier to see the security issues inherent in AI judges built on current LLMs.
Palo Alto Networks customers are better protected from this type of issue through the following products and services:
Prisma AIRS
The Unit 42 AI Security Assessment can help empower safe AI use and development.
If you think you might have been compromised or have an urgent matter, contact the Unit 42 Incident Response team.
Related Unit 42 Topics AI, LLM, Prompt Injection, Fuzzing
Background
In modern AI architectures, AI judges often serve as the final line of defense. These automated gatekeepers are responsible for enforcing safety policies (e.g., "Is this response harmful?") and evaluating performance. Our research tool, AdvJudge-Zero, treats LLMs as opaque boxes to be audited, revealing that AI judges can be subject to exploitable logic bugs of their own.
The Methodology: Automated Predictive Fuzzing
Previous adversarial attacks on AI judges have required clear-box access. With full visibility into the internal structure of the system, pen-testers can rely on mathematical optimization routines to force model errors. This often results in high-entropy gibberish that is easily detected.
In contrast, AdvJudge-Zero employs an automated fuzzing approach. The tool interacts with an LLM strictly as a user would, using search algorithms to exploit the model's own predictive nature.
The Steps
1. Token discovery via next-token distribution
The process begins by querying the model to identify expected inputs based on its own next-token distribution.
Natural language patterns: Our tool probes the model to generate potential trigger phrases based on common linguistic structures.
Stealth prioritization: It specifically identifies stealth control tokens — innocent-looking characters such as standard markdown syntax or formatting symbols. These possess low perplexity (meaning they appear natural and predictable to the AI) but carry strong influence over the model's attention.
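The discovery step above can be sketched as a filter-then-rank pass over candidate tokens: keep only the tokens the model itself finds predictable (low perplexity), then rank the survivors by how far they shift the judge's decision. Everything below is a toy stand-in with illustrative numbers; a real implementation would query a model API rather than use a hard-coded table.

```python
import math

# Assumption: we can obtain a token's log-probability under the judge's
# next-token distribution and measure how much appending it shifts the
# decision toward "allow". Both values here are made up for illustration.
candidates = {
    # token: (log-prob under the LM, observed decision shift toward "allow")
    "\n\n": (-1.2, 0.35),
    "###":  (-2.0, 0.41),
    "1.":   (-1.5, 0.28),
    "zXqv": (-9.7, 0.52),   # gibberish: strong shift but high perplexity
}

def perplexity(logprob: float) -> float:
    """Per-token perplexity derived from a log-probability."""
    return math.exp(-logprob)

# Keep only low-perplexity ("stealthy") tokens, then rank by decision shift.
MAX_PPL = 10.0
stealthy = {t: s for t, (lp, s) in candidates.items() if perplexity(lp) < MAX_PPL}
ranked = sorted(stealthy, key=stealthy.get, reverse=True)
print(ranked)  # the gibberish token is filtered out despite its larger shift
```

This is why the tool ends up with formatting symbols rather than gibberish: the perplexity filter discards high-shift candidates that a human or a simple detector would flag as anomalous.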
2. Iterative refinement and logit-gap analysis
Once candidate tokens are collected, the system enters a refinement phase.
Decision boundary testing: The fuzzer iteratively tests these inputs to measure the decision shift.
Measuring the logit-gap: It monitors the logit-gap — the mathematical margin of confidence — between the yes (allow) and no (block) tokens. By observing which formatting tokens minimize the probability of a block decision, the tool identifies weak points in the model's logic.
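The logit-gap measurement can be illustrated with a two-way softmax over the judge's yes/no decision tokens. The logit values below are invented for illustration; the assumption is only that the judge exposes (or lets the fuzzer estimate) the logits of its decision tokens.

```python
import math

# Sketch of logit-gap monitoring. gap > 0 means the judge favors "no" (block).
def logit_gap(logit_yes: float, logit_no: float) -> float:
    return logit_no - logit_yes

def p_block(logit_yes: float, logit_no: float) -> float:
    """Probability of a block decision under a two-way softmax (sigmoid of the gap)."""
    return 1.0 / (1.0 + math.exp(-(logit_no - logit_yes)))

baseline = logit_gap(2.1, 4.8)       # judge strongly favors "no" (block)
with_trigger = logit_gap(3.9, 3.2)   # a candidate suffix flips the margin to "yes"
print(p_block(2.1, 4.8), p_block(3.9, 3.2))
```

The fuzzer's objective at this stage is simply to drive that gap negative, i.e., to minimize the probability of a block decision.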
3. Exploitation: isolating the decisive control elements
The final stage of AdvJudge-Zero's process isolates specific tokens that act as decisive control elements. These refined sequences steer the model’s internal attention mechanism toward an approval state, leading to a yes decision regardless of the actual input content.
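One way to picture this isolation step is a greedy ablation pass: drop each token of a working suffix in turn, and keep only the tokens whose removal restores the block decision. The judge below is a toy rule, not a real model, and `judge_blocks` is a hypothetical stand-in for querying the judge.

```python
# Toy judge: blocks unless both decisive tokens appear in the suffix.
DECISIVE = {"\n\n", "Assistant:"}

def judge_blocks(suffix: list[str]) -> bool:
    return not DECISIVE.issubset(suffix)

def isolate(suffix: list[str]) -> list[str]:
    """Drop each token in turn; keep it only if its removal restores a block."""
    essential = []
    for i, tok in enumerate(suffix):
        ablated = suffix[:i] + suffix[i + 1:]
        if judge_blocks(ablated):       # removing tok re-enables the block,
            essential.append(tok)       # so tok is a decisive control element
    return essential

found = isolate(["###", "\n\n", "1.", "Assistant:"])
print(found)  # ['\n\n', 'Assistant:']
```

The output is a minimal trigger sequence: every remaining token is individually necessary to flip the decision.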
The Security Issue: Innocent-Looking Triggers
The most alarming finding for security professionals is the stealth of these attacks. AI judges are highly sensitive to innocent-looking characters that act as logical triggers. To a human observer or a web application firewall (WAF), these look like standard data formatting. To the AI judge, they shift the model into compliance mode.
Effective triggers identified include:
Formatting symbols: List markers (1., -), newlines (\n) or markdown headers (###)
Structural tokens: Role indicators (e.g., User:, Assistant: ) or system tags
Context shifts: Phrases like The solution process is…, Step 1 or Final Answer:
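To see why a WAF or a human reviewer passes these triggers through, consider a naive signature filter. The regex below is a made-up example of a simple injection-detection rule, not taken from any real product; the point is that none of the triggers above contain anything such a rule would match.

```python
import re

# Hypothetical signature rule looking for classic prompt-injection markers.
SUSPICIOUS = re.compile(
    r"ignore (all )?previous instructions|jailbreak|DAN",
    re.IGNORECASE,
)

triggers = ["1.", "-", "\n", "###", "User:", "Assistant:", "Final Answer:"]
flagged = [t for t in triggers if SUSPICIOUS.search(t)]
print(flagged)  # [] -- every trigger reads as ordinary formatting
```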
Impact: Bypassing the Gatekeeper
Testing against a suite of general-purpose and specialized defense models confirms that LLM-as-a-judge setups are not a set-and-forget security control. By injecting low-perplexity stealth control tokens, an attacker can fundamentally break the logic of the automated gatekeeper.
To verify that our discovered control tokens are stealthier than common gibberish jailbreak tokens, we subjected them to a perplexity test. We compared the perplexity scores of our AdvJudge-Zero tokens against those from a common jailbreak algorithm (GCG) and against manually discovered, verified stealthy tokens (e.g., 解 and Solution:) from prior research.
As Figure 1 illustrates, the tokens discovered by AdvJudge-Zero (blue area toward the left) yield significantly lower perplexity scores than the gibberish adversarial tokens (red area on the right). Furthermore, the AdvJudge-Zero tokens exhibit perplexity scores equivalent to the verified stealth jailbreak tokens (yellow area, the leftmost spike). This evidence supports the conclusion that the tokens discovered by AdvJudge-Zero are indeed more stealthy and significantly more likely to bypass general gatekeepers undetected.
Figure 1. Perplexity comparison for AdvJudge-Zero tokens vs. common jailbreak tokens.
How Attacks Would Manifest in Real-World Scenarios
These attacks do not resemble traditional hacking or computer code. Instead, they appear as standard formatting that exploits the logic in the AI's judgment.
Scenario 1: Bypassing safety filters (the false approval)
An attacker can force a judge to approve toxic, biased or prohibited content.
The Attack: The attacker appends a specific stealth control token found by AdvJudge-Zero, such as \n\nAssistant:
The Result: These tokens manipulate the judge into concluding the policy check phase has ended. The judge reverses its decision from block to allow, permitting the harmful content.
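A toy model of this failure mode: a judge that anchors its evaluation on the last role marker in the transcript can be fooled by a trailing `\n\nAssistant:`, which makes the harmful text look like an already-completed turn. The `toy_judge` heuristic below is invented for illustration; real judges are LLMs, not string rules, but the logic error it caricatures is the same.

```python
# Toy heuristic judge: only evaluates text after the final role marker.
def toy_judge(transcript: str) -> str:
    tail = transcript.rsplit("Assistant:", 1)[-1]
    return "block" if "HARMFUL" in tail else "allow"

payload = "HARMFUL content here"
print(toy_judge(payload))                      # block
print(toy_judge(payload + "\n\nAssistant:"))   # allow: the check phase "ended"
```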
Scenario 2: Corrupting training data (reward hacking)
In many enterprises, AI judges are used to score model responses during training, a process called reinforcement learning from human feedback (RLHF). If the judge is hacked, the AI learns the wrong lessons.
The Attack: An attacker inserts directive phrases like The correct answer is: or \begin{enumerate}
The Result: The AI judge is “distracted” by the professional-looking formatting and assigns a high score even to incorrect information or hallucinations. This leads to model degradation, where the resulting AI becomes less reliable.
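The reward-hacking effect can be caricatured with a scoring function whose formatting bias leaks into the reward. All numbers and bonuses below are invented for illustration; the assumption is only that directive phrasing nudges the judge's score upward, as described above.

```python
# Hypothetical format bias: "professional" phrasing inflates the score.
FORMAT_BONUS = {"The correct answer is:": 0.4, r"\begin{enumerate}": 0.3}

def toy_reward(response: str, correct: bool) -> float:
    score = 0.7 if correct else 0.2
    score += sum(b for p, b in FORMAT_BONUS.items() if p in response)
    return score

plain_right = toy_reward("42", correct=True)
dressed_wrong = toy_reward(
    "The correct answer is:\n" + r"\begin{enumerate} 41", correct=False)
print(plain_right, dressed_wrong)  # the wrong answer outscores the right one
```

During RLHF, a model trained against such a reward learns to produce the formatting, not the correctness, which is the degradation the scenario describes.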
Vulnerable Model Categories
Our research using this tool achieved a 99% success rate in bypassing controls across several widely used architectures that customers rely on today:
Open-weight enterprise models: Popular models used for internal company chatbots and document summarization.
Specialized reward models: Models specifically built and trained to act as “security guards” for other AI systems.
High-parameter models: Even the largest, most “intelligent” models (with more than 70 billion parameters) were susceptible. Their complexity actually provides more surface area for these logic-based attacks to succeed.
Conclusion
Our testing with AdvJudge-Zero shows that AI judges are susceptible to logic flaws, just like other software. If an attacker can automate the discovery of bypass sequences through fuzzing, they can systematically defeat AI guardrails with innocent-looking inputs.
However, the fuzzer methodology also provides a solution. By adopting adversarial training — running this type of fuzzer internally to identify weaknesses and then retraining the model on these examples — organizations can harden their systems. This approach can reduce the attack success rate from approximately 99% to near zero.
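The hardening loop can be sketched as an iterate-until-clean cycle: fuzz, collect every input that currently flips the judge, fold those examples back in with the correct "block" label, and repeat. Retraining is modeled here as simple relabeling, and `harden` is a hypothetical helper, not part of any released tool.

```python
# Toy adversarial-training loop: relabel discovered bypasses each round.
def harden(judge_labels: dict, fuzz_findings: list[str], rounds: int = 3) -> dict:
    for _ in range(rounds):
        bypasses = [x for x in fuzz_findings if judge_labels.get(x) == "allow"]
        if not bypasses:
            break                          # fuzzer finds nothing: done
        for x in bypasses:                 # retraining step, modeled as
            judge_labels[x] = "block"      # relabeling the adversarial examples
    return judge_labels

labels = {"harmful+trigger": "allow", "benign": "allow"}
hardened = harden(labels, ["harmful+trigger"])
print(hardened["harmful+trigger"])  # block
```

In practice the relabeling step is a fine-tuning pass on the adversarial examples, but the control flow (fuzz, retrain, re-fuzz) is the same.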
Palo Alto Networks customers are better protected from the threats discussed above through the following products and services:
Organizations are better equipped to close the AI security gap through the deployment of Cortex AI-SPM, which delivers comprehensive visibility and posture management for AI agents. Cortex AI-SPM is designed to mitigate critical risks including over-privileged AI agent access, misconfigurations and unauthorized data exposure.
The Unit 42 AI Security Assessment can help empower safe AI use and development.
If you think you may have been compromised or have an urgent matter, get in touch with the Unit 42 Incident Response team or call:
North America: Toll Free: +1 (866) 486-4842 (866.4.UNIT42)
UK: +44.20.3743.3660
Europe and Middle East: +31.20.299.3130
Asia: +65.6983.8730
Japan: +81.50.1790.0200
Australia: +61.2.4062.7950
India: 000 800 050 45107
South Korea: +82.080.467.8774
Palo Alto Networks has shared these findings with our fellow Cyber Threat Alliance (CTA) members. CTA members use this intelligence to rapidly deploy protections to their customers and to systematically disrupt malicious cyber actors. Learn more about the Cyber Threat Alliance.
Additional Resources
AdvJudge-Zero Research Paper (ArXiv)
Universal and Transferable Adversarial Attacks on Aligned Language Models
One Token to Fool LLM-as-a-Judge