CyberIntel ⬡ News

AI & Machine Learning · Intel Feed

cyberintel.kalymoon.com  ·  2686 articles  ·  updated every 4 hours · grows forever

2686 total · 2643 full text · latest: May 16, 2026
◬ AI & Machine Learning May 13, 2026
Continuous Discovery of Vulnerabilities in LLM Serving Systems with Fuzzing

arXiv:2605.11202v1 Announce Type: new Abstract: LLM inference and serving systems have become security-critical infrastructure; however, many of their most concerning failures arise from the serving l…

arXiv Security Read →
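The abstract above frames fuzzing of LLM serving layers (request parsing, parameter validation) rather than the model itself. A minimal mutation-based sketch of that idea, not the paper's system; `send_request` is a hypothetical callable standing in for a serving endpoint:

```python
import random

def mutate(payload: dict) -> dict:
    """Apply one random structural mutation to a JSON-like request body."""
    p = dict(payload)
    choice = random.choice(["huge_field", "type_confusion", "drop_key"])
    if choice == "huge_field":
        p["prompt"] = "A" * random.randint(10_000, 100_000)     # oversized input
    elif choice == "type_confusion":
        p["max_tokens"] = random.choice([-1, "NaN", [0], None])  # wrong type/range
    elif choice == "drop_key" and p:
        p.pop(random.choice(list(p)))                            # missing field
    return p

def fuzz(send_request, seed: dict, iterations: int = 100):
    """Send mutated requests; collect the inputs that crash the serving layer."""
    crashes = []
    for _ in range(iterations):
        case = mutate(seed)
        try:
            send_request(case)
        except Exception as exc:
            crashes.append((case, repr(exc)))
    return crashes
```

A real campaign would add coverage feedback and corpus management; this only shows the mutate-send-observe loop the abstract alludes to.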
◬ AI & Machine Learning May 13, 2026
Adversarial SQL Injection Generation with LLM-Based Architectures

arXiv:2605.11188v1 Announce Type: new Abstract: SQL injection (SQLi) attacks are still one of the serious attacks ranked in the Open Worldwide Application Security Project (OWASP) Top 10 threats. Toda…

arXiv Security Read →
◬ AI & Machine Learning May 13, 2026
Benchmarking LLM-Based Static Analysis for Secure Smart Contract Development: Reliability, Limitations, and Potential Hybrid Solutions

arXiv:2605.11163v1 Announce Type: new Abstract: The irreversible nature of blockchain transactions makes the identification of smart contract vulnerabilities an essential requirement for secure system…

arXiv Security Read →
◬ AI & Machine Learning May 13, 2026
FedSurrogate: Backdoor Defense in Federated Learning via Layer Criticality and Surrogate Replacement

arXiv:2605.11122v1 Announce Type: new Abstract: Federated Learning remains highly susceptible to backdoor attacks--malicious clients inject targeted behaviours into the global model. Existing defenses…

arXiv Security Read →
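The snippet describes filtering backdoored client updates by layer. A toy sketch of the general idea (not FedSurrogate's actual layer-criticality or surrogate-replacement method): score each client's per-layer update against the element-wise mean across clients and flag low-similarity outliers.

```python
import math

def cosine(u, v):
    """Cosine similarity of two flat float lists."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def flag_suspicious(updates, threshold=0.5):
    """updates: {client_id: {layer_name: flat list of floats}}.
    Compare each client's per-layer update to the cross-client mean;
    flag clients whose average similarity falls below the threshold."""
    layers = next(iter(updates.values())).keys()
    dim = {ly: len(next(iter(updates.values()))[ly]) for ly in layers}
    mean = {
        ly: [sum(u[ly][i] for u in updates.values()) / len(updates)
             for i in range(dim[ly])]
        for ly in layers
    }
    scores = {
        cid: sum(cosine(u[ly], mean[ly]) for ly in layers) / len(layers)
        for cid, u in updates.items()
    }
    return {cid for cid, s in scores.items() if s < threshold}, scores
```

A naive mean is itself poisonable by colluding clients; the abstract's point is that weighting by layer criticality and replacing suspect layers is more robust than this uniform check.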
◬ AI & Machine Learning May 13, 2026
ExploitGym: Can AI Agents Turn Security Vulnerabilities into Real Attacks?

arXiv:2605.11086v1 Announce Type: new Abstract: AI agents are rapidly gaining capabilities that could significantly reshape cybersecurity, making rigorous evaluation urgent. A critical capability is e…

arXiv Security Read →
◬ AI & Machine Learning May 13, 2026
MCPShield: Content-Aware Attack Detection for LLM Agent Tool-Call Traffic

arXiv:2605.11053v1 Announce Type: new Abstract: The Model Context Protocol (MCP) has become a widely adopted interface for LLM agents to invoke external tools, yet learned monitoring of MCP tool-call …

arXiv Security Read →
◬ AI & Machine Learning May 13, 2026
Red-Teaming Agent Execution Contexts: Open-World Security Evaluation on OpenClaw

arXiv:2605.11047v1 Announce Type: new Abstract: Agentic language-model systems increasingly rely on mutable execution contexts, including files, memory, tools, skills, and auxiliary artifacts, creatin…

arXiv Security Read →
◬ AI & Machine Learning May 13, 2026
A Multi-Interface Firmware Acquisition and Validation Methodology for Low-Cost Consumer Drones: A Case Study on Three Holy Stone Platforms

arXiv:2605.11040v1 Announce Type: new Abstract: Consumer unmanned aerial vehicles (UAVs) have evolved into capable computing platforms, yet their embedded firmware remains largely inaccessible to the …

arXiv Security Read →
◬ AI & Machine Learning May 13, 2026
The Granularity Mismatch in Agent Security: Argument-Level Provenance Solves Enforcement and Isolates the LLM Reasoning Bottleneck

arXiv:2605.11039v1 Announce Type: new Abstract: Tool-using LLM agents must act on untrusted webpages, emails, files, and API outputs while issuing privileged tool calls. Existing defenses often mediat…

arXiv Security Read →
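The abstract argues for mediating at the level of individual tool-call arguments rather than whole messages. A minimal sketch of that granularity, with hypothetical names (`Tainted`, `check_call`) not taken from the paper: tag each value with its provenance and gate privileged calls on per-argument trust.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Tainted:
    """A value paired with the provenance of where it came from."""
    value: str
    source: str  # e.g. "user", "webpage", "email"

PRIVILEGED = {"send_email", "delete_file"}
TRUSTED_SOURCES = {"user"}

def check_call(tool: str, args: dict) -> bool:
    """Allow a privileged tool call only if every argument's provenance
    is trusted -- enforcement per argument, not per message."""
    if tool not in PRIVILEGED:
        return True
    return all(
        not isinstance(v, Tainted) or v.source in TRUSTED_SOURCES
        for v in args.values()
    )
```

The enforcement itself is mechanical; per the title, the remaining hard part is getting the LLM's reasoning to propagate provenance labels correctly in the first place.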
◬ AI & Machine Learning May 13, 2026
Sequential Behavioral Watermarking for LLM Agents

arXiv:2605.11036v1 Announce Type: new Abstract: LLM-based agents act through sequences of executable decisions, but their trajectories provide little evidence of which agent or policy produced them, m…

arXiv Security Read →
◬ AI & Machine Learning May 13, 2026
MambaNetBurst: Direct Byte-level Network Traffic Classification without Tokenization or Pretraining

arXiv:2605.11034v1 Announce Type: new Abstract: We present MambaNetBurst, a compact tokenizer-free byte-level sequence classifier for network burst classification based on a Mamba-2 backbone. In contr…

arXiv Security Read →
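"Tokenizer-free byte-level" input, as the abstract describes it, just means the model consumes raw byte values directly. A sketch of the preprocessing step under that assumption (the padding id and sequence length are illustrative, not MambaNetBurst's actual choices):

```python
def bytes_to_sequence(packets, max_len=256, pad=256):
    """Flatten a burst of raw packets into one integer sequence of byte
    values (0-255), truncated or padded to max_len. The pad id sits
    outside the byte range so the model can tell padding from data."""
    seq = [b for pkt in packets for b in pkt][:max_len]
    seq += [pad] * (max_len - len(seq))
    return seq
```

No vocabulary, no pretraining corpus: the 257-symbol alphabet is fixed by construction, which is the contrast with tokenized traffic models the abstract draws.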
◬ AI & Machine Learning May 13, 2026
Portable Agent Memory: A Protocol for Cryptographically-Verified Memory Transfer Across Heterogeneous AI Agents

arXiv:2605.11032v1 Announce Type: new Abstract: We present Portable Agent Memory, an open protocol and reference implementation for transferring persistent memory state across heterogeneous AI agents.…

arXiv Security Read →
◬ AI & Machine Learning May 13, 2026
FragBench: Cross-Session Attacks Hidden in Benign-Looking Fragments

arXiv:2605.11029v1 Announce Type: new Abstract: An attacker can split a malicious goal into sub-prompts that each look benign on their own and only become harmful in combination. Existing LLM safety b…

arXiv Security Read →
◬ AI & Machine Learning May 13, 2026
AgentShield: Deception-based Compromise Detection for Tool-using LLM Agents

arXiv:2605.11026v1 Announce Type: new Abstract: Defenses against indirect prompt injection (IPI) in tool-using LLM agents share two structural weaknesses. First, they all attempt to prevent attacks ra…

arXiv Security Read →
◬ AI & Machine Learning May 13, 2026
DCVD: Dual-Channel Cross-Modal Fusion for Joint Vulnerability Detection and Localization

arXiv:2605.11015v1 Announce Type: new Abstract: Software vulnerability detection plays a critical role in ensuring system security, where real-world auditing requires not only determining whether a fu…

arXiv Security Read →
◬ AI & Machine Learning May 13, 2026
The Authorization-Execution Gap Is a Major Safety and Security Problem in Open-World Agents

arXiv:2605.11003v1 Announce Type: new Abstract: This position paper argues that the Authorization-Execution Gap (AEG) is a major safety and security problem in open-world agents. The AEG is the diverg…

arXiv Security Read →
◬ AI & Machine Learning May 13, 2026
MT-JailBench: A Modular Benchmark for Understanding Multi-Turn Jailbreak Attacks

arXiv:2605.11002v1 Announce Type: new Abstract: Multi-turn jailbreaks exploit the ability of large language models to accumulate and act on conversational context. Instead of stating a harmful request…

arXiv Security Read →
◬ AI & Machine Learning May 13, 2026
Few-Shot Truly Benign DPO Attack for Jailbreaking LLMs

arXiv:2605.10998v1 Announce Type: new Abstract: Fine-tuning APIs make frontier LLMs easy to customize, but they can also weaken safety alignment during fine-tuning. While prior work shows that benign …

arXiv Security Read →
◬ AI & Machine Learning May 13, 2026
PASA: A Principled Embedding-Space Watermarking Approach for LLM-Generated Text under Semantic-Invariant Attacks

arXiv:2605.10977v1 Announce Type: new Abstract: Watermarking for large language models (LLMs) is a promising approach for detecting LLM-generated text and enabling responsible deployment. However, exi…

arXiv Security Read →
◬ AI & Machine Learning May 13, 2026
AI Security Adds Defender Burden Faster Than Skills Catch Up

Cybersecurity Insiders Read →
◬ AI & Machine Learning May 12, 2026
The Echo Amplifies the Knowledge: Somatic Marker Analogues in Language Models via Emotion Vector Re-Injection

arXiv:2605.08611v1 Announce Type: new Abstract: Current language model memory systems store what happened but not how it felt. This distinction -- between semantic memory (knowing about a past event) …

arXiv AI Read →
◬ AI & Machine Learning May 12, 2026
What Will Happen Next: Large Models-Driven Deduction for Emergency Instances

arXiv:2605.08599v1 Announce Type: new Abstract: Traditional simulation methods reproduce past emergency instances through preset scenarios to assist people with risk assessment and emergency decision-maki…

arXiv AI Read →
◬ AI & Machine Learning May 12, 2026
Biological Plausibility and Representational Alignment of Feedback Alignment in Convolutional Networks

arXiv:2605.08564v1 Announce Type: new Abstract: The feedback alignment (FA) algorithm offers a biologically plausible alternative to backpropagation (BP) for training neural networks yet notably fails…

arXiv AI Read →
◬ AI & Machine Learning May 12, 2026
Why Retrying Fails: Context Contamination in LLM Agent Pipelines

arXiv:2605.08563v1 Announce Type: new Abstract: When an LLM agent fails a multi-step tool-augmented task and retries, the failed attempt typically remains in its context window -- contaminating the ne…

arXiv AI Read →
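The failure mode above -- a failed attempt left in the context window poisoning the retry -- suggests an obvious baseline fix: rebuild the context from scratch on each attempt. A sketch with a hypothetical `agent_step(context) -> (ok, context)` callable:

```python
def run_with_retries(agent_step, task, max_retries=3):
    """Retry a failed agent task with a FRESH context on every attempt,
    instead of appending the failed transcript (which contaminates the
    next attempt's reasoning)."""
    for _ in range(max_retries):
        context = [{"role": "user", "content": task}]  # reset; no carry-over
        ok, context = agent_step(context)
        if ok:
            return context
    return None
```

The trade-off, which the paper presumably examines, is that a clean reset also discards any genuinely useful partial progress from the failed attempt.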
Page 7 of 112