arXiv:2605.11202v1 Announce Type: new Abstract: LLM inference and serving systems have become security-critical infrastructure; however, many of their most concerning failures arise from the serving l…
cyberintel.kalymoon.com · 2686 articles · updated every 4 hours · grows forever
arXiv:2605.11188v1 Announce Type: new Abstract: SQL injection (SQLi) attacks remain among the most serious attacks ranked in the Open Worldwide Application Security Project (OWASP) Top 10 threats. Toda…
arXiv:2605.11163v1 Announce Type: new Abstract: The irreversible nature of blockchain transactions makes the identification of smart contract vulnerabilities an essential requirement for secure system…
arXiv:2605.11122v1 Announce Type: new Abstract: Federated Learning remains highly susceptible to backdoor attacks--malicious clients inject targeted behaviours into the global model. Existing defenses…
arXiv:2605.11086v1 Announce Type: new Abstract: AI agents are rapidly gaining capabilities that could significantly reshape cybersecurity, making rigorous evaluation urgent. A critical capability is e…
arXiv:2605.11053v1 Announce Type: new Abstract: The Model Context Protocol (MCP) has become a widely adopted interface for LLM agents to invoke external tools, yet learned monitoring of MCP tool-call …
arXiv:2605.11047v1 Announce Type: new Abstract: Agentic language-model systems increasingly rely on mutable execution contexts, including files, memory, tools, skills, and auxiliary artifacts, creatin…
arXiv:2605.11040v1 Announce Type: new Abstract: Consumer unmanned aerial vehicles (UAVs) have evolved into capable computing platforms, yet their embedded firmware remains largely inaccessible to the …
arXiv:2605.11039v1 Announce Type: new Abstract: Tool-using LLM agents must act on untrusted webpages, emails, files, and API outputs while issuing privileged tool calls. Existing defenses often mediat…
arXiv:2605.11036v1 Announce Type: new Abstract: LLM-based agents act through sequences of executable decisions, but their trajectories provide little evidence of which agent or policy produced them, m…
arXiv:2605.11034v1 Announce Type: new Abstract: We present MambaNetBurst, a compact tokenizer-free byte-level sequence classifier for network burst classification based on a Mamba-2 backbone. In contr…
arXiv:2605.11032v1 Announce Type: new Abstract: We present Portable Agent Memory, an open protocol and reference implementation for transferring persistent memory state across heterogeneous AI agents.…
arXiv:2605.11029v1 Announce Type: new Abstract: An attacker can split a malicious goal into sub-prompts that each look benign on their own and only become harmful in combination. Existing LLM safety b…
arXiv:2605.11026v1 Announce Type: new Abstract: Defenses against indirect prompt injection (IPI) in tool-using LLM agents share two structural weaknesses. First, they all attempt to prevent attacks ra…
arXiv:2605.11015v1 Announce Type: new Abstract: Software vulnerability detection plays a critical role in ensuring system security, where real-world auditing requires not only determining whether a fu…
arXiv:2605.11003v1 Announce Type: new Abstract: This position paper argues that the Authorization-Execution Gap (AEG) is a major safety and security problem in open-world agents. The AEG is the diverg…
arXiv:2605.11002v1 Announce Type: new Abstract: Multi-turn jailbreaks exploit the ability of large language models to accumulate and act on conversational context. Instead of stating a harmful request…
arXiv:2605.10998v1 Announce Type: new Abstract: Fine-tuning APIs make frontier LLMs easy to customize, but they can also weaken safety alignment during fine-tuning. While prior work shows that benign …
arXiv:2605.10977v1 Announce Type: new Abstract: Watermarking for large language models (LLMs) is a promising approach for detecting LLM-generated text and enabling responsible deployment. However, exi…
AI Security Adds Defender Burden Faster Than Skills Catch Up (Cybersecurity Insiders)
arXiv:2605.08611v1 Announce Type: new Abstract: Current language model memory systems store what happened but not how it felt. This distinction -- between semantic memory (knowing about a past event) …
arXiv:2605.08599v1 Announce Type: new Abstract: Traditional simulation methods reproduce past emergency incidents through preset scenarios to assist people in risk assessment and emergency decision-maki…
arXiv:2605.08564v1 Announce Type: new Abstract: The feedback alignment (FA) algorithm offers a biologically plausible alternative to backpropagation (BP) for training neural networks yet notably fails…
arXiv:2605.08563v1 Announce Type: new Abstract: When an LLM agent fails a multi-step tool-augmented task and retries, the failed attempt typically remains in its context window -- contaminating the ne…