CyberIntel ⬡ News

Cyber Intel Feed

cyberintel.kalymoon.com  ·  20670 articles  ·  updated every 4 hours · grows forever

20,670 Total
17,999 Full Text
Latest: May 17, 2026
◈ Women in Cyber ◉ Threat Intelligence ◎ How-To & Tutorials ⬡ Vulnerabilities & CVEs 🔍 Digital Forensics ◍ Incident Response & DFIR ◆ Security Tools & Reviews ◇ Industry News & Leadership ✉ Email Security 🛡 Active Threats ⚠ Critical CVEs ◐ Insider Threat & DLP ◌ Quantum Computing ◬ AI & Machine Learning
🔥 Trending Topics · Last 48h
◬ AI & Machine Learning May 13, 2026
A Systematic Security Testing Approach for InterUSS-based environments

arXiv:2605.11339v1 Announce Type: new Abstract: Unmanned Traffic Management (UTM) federated ecosystems, such as InterUSS, enable secure coordination among UAS Service Suppliers (USSs). However, they b…

arXiv Security Read →
◬ AI & Machine Learning May 13, 2026
Context-Aware Spear Phishing: Generative AI-Enabled Attacks Against Individuals via Public Social Media Data

arXiv:2605.11268v1 Announce Type: new Abstract: We demonstrate how publicly available social-media data and generative AI (GenAI) can be misused to automate and scale highly personalized, context-awar…

arXiv Security Read →
◬ AI & Machine Learning May 13, 2026
Comment and Control: Hijacking Agentic Workflows via Context-Grounded Evolution

arXiv:2605.11229v1 Announce Type: new Abstract: Automation platforms such as GitHub Actions and n8n are increasingly adopting so-called agentic workflows, which integrate Large Language Model (LLM) ag…

arXiv Security Read →
◬ AI & Machine Learning May 13, 2026
Continuous Discovery of Vulnerabilities in LLM Serving Systems with Fuzzing

arXiv:2605.11202v1 Announce Type: new Abstract: LLM inference and serving systems have become security-critical infrastructure; however, many of their most concerning failures arise from the serving l…

arXiv Security Read →
◬ AI & Machine Learning May 13, 2026
Adversarial SQL Injection Generation with LLM-Based Architectures

arXiv:2605.11188v1 Announce Type: new Abstract: SQL injection (SQLi) remains among the most serious attacks ranked in the Open Worldwide Application Security Project (OWASP) Top 10 threats. Toda…

arXiv Security Read →
◬ AI & Machine Learning May 13, 2026
Benchmarking LLM-Based Static Analysis for Secure Smart Contract Development: Reliability, Limitations, and Potential Hybrid Solutions

arXiv:2605.11163v1 Announce Type: new Abstract: The irreversible nature of blockchain transactions makes the identification of smart contract vulnerabilities an essential requirement for secure system…

arXiv Security Read →
◬ AI & Machine Learning May 13, 2026
FedSurrogate: Backdoor Defense in Federated Learning via Layer Criticality and Surrogate Replacement

arXiv:2605.11122v1 Announce Type: new Abstract: Federated Learning remains highly susceptible to backdoor attacks--malicious clients inject targeted behaviours into the global model. Existing defenses…

arXiv Security Read →
◬ AI & Machine Learning May 13, 2026
ExploitGym: Can AI Agents Turn Security Vulnerabilities into Real Attacks?

arXiv:2605.11086v1 Announce Type: new Abstract: AI agents are rapidly gaining capabilities that could significantly reshape cybersecurity, making rigorous evaluation urgent. A critical capability is e…

arXiv Security Read →
◬ AI & Machine Learning May 13, 2026
MCPShield: Content-Aware Attack Detection for LLM Agent Tool-Call Traffic

arXiv:2605.11053v1 Announce Type: new Abstract: The Model Context Protocol (MCP) has become a widely adopted interface for LLM agents to invoke external tools, yet learned monitoring of MCP tool-call …

arXiv Security Read →
◬ AI & Machine Learning May 13, 2026
Red-Teaming Agent Execution Contexts: Open-World Security Evaluation on OpenClaw

arXiv:2605.11047v1 Announce Type: new Abstract: Agentic language-model systems increasingly rely on mutable execution contexts, including files, memory, tools, skills, and auxiliary artifacts, creatin…

arXiv Security Read →
◬ AI & Machine Learning May 13, 2026
A Multi-Interface Firmware Acquisition and Validation Methodology for Low-Cost Consumer Drones: A Case Study on Three Holy Stone Platforms

arXiv:2605.11040v1 Announce Type: new Abstract: Consumer unmanned aerial vehicles (UAVs) have evolved into capable computing platforms, yet their embedded firmware remains largely inaccessible to the …

arXiv Security Read →
◬ AI & Machine Learning May 13, 2026
The Granularity Mismatch in Agent Security: Argument-Level Provenance Solves Enforcement and Isolates the LLM Reasoning Bottleneck

arXiv:2605.11039v1 Announce Type: new Abstract: Tool-using LLM agents must act on untrusted webpages, emails, files, and API outputs while issuing privileged tool calls. Existing defenses often mediat…

arXiv Security Read →
◬ AI & Machine Learning May 13, 2026
Sequential Behavioral Watermarking for LLM Agents

arXiv:2605.11036v1 Announce Type: new Abstract: LLM-based agents act through sequences of executable decisions, but their trajectories provide little evidence of which agent or policy produced them, m…

arXiv Security Read →
◬ AI & Machine Learning May 13, 2026
MambaNetBurst: Direct Byte-level Network Traffic Classification without Tokenization or Pretraining

arXiv:2605.11034v1 Announce Type: new Abstract: We present MambaNetBurst, a compact tokenizer-free byte-level sequence classifier for network burst classification based on a Mamba-2 backbone. In contr…

arXiv Security Read →
◬ AI & Machine Learning May 13, 2026
Portable Agent Memory: A Protocol for Cryptographically-Verified Memory Transfer Across Heterogeneous AI Agents

arXiv:2605.11032v1 Announce Type: new Abstract: We present Portable Agent Memory, an open protocol and reference implementation for transferring persistent memory state across heterogeneous AI agents.…

arXiv Security Read →
◬ AI & Machine Learning May 13, 2026
FragBench: Cross-Session Attacks Hidden in Benign-Looking Fragments

arXiv:2605.11029v1 Announce Type: new Abstract: An attacker can split a malicious goal into sub-prompts that each look benign on their own and only become harmful in combination. Existing LLM safety b…

arXiv Security Read →
◬ AI & Machine Learning May 13, 2026
AgentShield: Deception-based Compromise Detection for Tool-using LLM Agents

arXiv:2605.11026v1 Announce Type: new Abstract: Defenses against indirect prompt injection (IPI) in tool-using LLM agents share two structural weaknesses. First, they all attempt to prevent attacks ra…

arXiv Security Read →
◬ AI & Machine Learning May 13, 2026
DCVD: Dual-Channel Cross-Modal Fusion for Joint Vulnerability Detection and Localization

arXiv:2605.11015v1 Announce Type: new Abstract: Software vulnerability detection plays a critical role in ensuring system security, where real-world auditing requires not only determining whether a fu…

arXiv Security Read →
◬ AI & Machine Learning May 13, 2026
The Authorization-Execution Gap Is a Major Safety and Security Problem in Open-World Agents

arXiv:2605.11003v1 Announce Type: new Abstract: This position paper argues that the Authorization-Execution Gap (AEG) is a major safety and security problem in open-world agents. The AEG is the diverg…

arXiv Security Read →
◬ AI & Machine Learning May 13, 2026
MT-JailBench: A Modular Benchmark for Understanding Multi-Turn Jailbreak Attacks

arXiv:2605.11002v1 Announce Type: new Abstract: Multi-turn jailbreaks exploit the ability of large language models to accumulate and act on conversational context. Instead of stating a harmful request…

arXiv Security Read →
◬ AI & Machine Learning May 13, 2026
Few-Shot Truly Benign DPO Attack for Jailbreaking LLMs

arXiv:2605.10998v1 Announce Type: new Abstract: Fine-tuning APIs make frontier LLMs easy to customize, but they can also weaken safety alignment during fine-tuning. While prior work shows that benign …

arXiv Security Read →
◬ AI & Machine Learning May 13, 2026
PASA: A Principled Embedding-Space Watermarking Approach for LLM-Generated Text under Semantic-Invariant Attacks

arXiv:2605.10977v1 Announce Type: new Abstract: Watermarking for large language models (LLMs) is a promising approach for detecting LLM-generated text and enabling responsible deployment. However, exi…

arXiv Security Read →
◆ Security Tools & Reviews May 13, 2026
Patch Tuesday - May 2026

Microsoft is publishing 137 vulnerabilities on May 2026 Patch Tuesday. Microsoft is not aware of exploitation in the wild or public disclosure for any of these vulnerabilities. So far this month, Mic…

Rapid7 Read →
◉ Threat Intelligence May 13, 2026
Defense at AI speed: Microsoft’s new multi-model agentic security system tops leading industry benchmark

Today Microsoft is announcing a major step forward in AI-powered cyber defense: a new multi-model agentic scanning harness (codenamed MDASH).

Microsoft Security Read →