CyberIntel ⬡ News

Intel Feed · AI & Machine Learning

cyberintel.kalymoon.com  ·  2686 articles  ·  updated every 4 hours · grows forever

2686 Total · 2643 Full Text · Latest: May 16, 2026
◬ AI & Machine Learning May 14, 2026
ThermalTap: Passive Application Fingerprinting in VR Headsets via Thermal Side Channels

arXiv:2605.12927v1 Announce Type: new Abstract: Standalone virtual reality (VR) headsets process highly sensitive personal, professional, and health-related data, yet their susceptibility to non-conta…

arXiv Security Read →
◬ AI & Machine Learning May 14, 2026
Do Skill Descriptions Tell the Truth? Detecting Undisclosed Security Behaviors in Code-Backed LLM Skills

arXiv:2605.12875v1 Announce Type: new Abstract: Programmatic skills in LLM ecosystems consist of a natural-language description and executable implementation files. Users and LLMs rely on the descript…

arXiv Security Read →
◬ AI & Machine Learning May 14, 2026
Quantifying LLM Safety Degradation Under Repeated Attacks Using Survival Analysis

arXiv:2605.12869v1 Announce Type: new Abstract: Large language models (LLMs) are increasingly deployed in a wide range of applications, yet remain vulnerable to adversarial jailbreak attacks that circ…

arXiv Security Read →
◬ AI & Machine Learning May 14, 2026
HE-PIM: Demystifying Homomorphic Operations on a Real-world Processing-in-Memory System

arXiv:2605.12841v1 Announce Type: new Abstract: Homomorphic encryption (HE) enables computation over encrypted data, offering strong privacy guarantees for untrusted computing environments. Practical …

arXiv Security Read →
◬ AI & Machine Learning May 14, 2026
GraphIP-Bench: How Hard Is It to Steal a Graph Neural Network, and Can We Stop It?

arXiv:2605.12827v1 Announce Type: new Abstract: Graph neural networks (GNNs) deployed as cloud services can be stolen through model-extraction attacks, which train a surrogate from query…

arXiv Security Read →
◬ AI & Machine Learning May 14, 2026
CoT-Guard: Small Models for Strong Monitoring

arXiv:2605.12746v1 Announce Type: new Abstract: Monitoring the chain-of-thought (CoT) of reasoning models is a promising approach for detecting covert misbehavior (i.e., hidden objectives) in code gen…

arXiv Security Read →
◬ AI & Machine Learning May 14, 2026
Still Camouflage, Moving Illusion: View-Induced Trajectory Manipulation in Autonomous Driving

arXiv:2605.12743v1 Announce Type: new Abstract: Existing physical adversarial attacks on vision-based autonomous driving induce time-evolving perception errors, including biased object tracking or tra…

arXiv Security Read →
◬ AI & Machine Learning May 14, 2026
Persona-Conditioned Adversarial Prompting (PCAP): Multi-Identity Red-Teaming for Enhanced Adversarial Prompt Discovery

arXiv:2605.12565v1 Announce Type: new Abstract: Existing automated red-teaming pipelines often miss attacks that depend on attacker identity, framing, or multi-turn tactics. This under-coverage undere…

arXiv Security Read →
◬ AI & Machine Learning May 14, 2026
OverrideFuzz: Semantic-Aware Grammar Fuzzing for Script-Runtime Vulnerabilities

arXiv:2605.12563v1 Announce Type: new Abstract: Script-language runtimes such as Python, Lua, and JavaScript are widely deployed in security sensitive contexts, yet they remain difficult to test becau…

arXiv Security Read →
◬ AI & Machine Learning May 14, 2026
Ghost in the Context: Measuring Policy-Carriage Failures in Decision-Time Assembly

arXiv:2605.12535v1 Announce Type: new Abstract: LM agents do not act on raw interaction history; they act on a bounded decision state assembled by truncation, summarization, reordering, and rewriting.…

arXiv Security Read →
◬ AI & Machine Learning May 14, 2026
BackFlush: Knowledge-Free Backdoor Detection and Elimination with Watermark Preservation in Large Language Models

arXiv:2605.12529v1 Announce Type: new Abstract: Recent trends show that Large Language Models (LLMs) are exposed to backdoor attacks, in which malicious triggers added during training or model edi…

arXiv Security Read →
◬ AI & Machine Learning May 14, 2026
What OpenClaw reveals about agentic AI security risks - IBM


IBM Read →
◬ AI & Machine Learning May 13, 2026
AI chatbots are giving out people’s real phone numbers

People report that their personal contact info was surfaced by Google AI—and there’s apparently no easy way to prevent it. A Redditor recently wrote that he was “desperate for help”: for about a month…

MIT Tech Review AI Read →
◬ AI & Machine Learning May 13, 2026
Safety Context Injection: Inference-Time Safety Alignment via Static Filtering and Agentic Analysis

arXiv:2605.11664v1 Announce Type: new Abstract: Large Reasoning Models (LRMs) improve performance on complex tasks, but they also make safety control harder at deployment time. In black-box settings, …

arXiv Security Read →
◬ AI & Machine Learning May 13, 2026
Every Bit, Everywhere, All at Once: A Binomial Multibit LLM Watermark

arXiv:2605.11653v1 Announce Type: new Abstract: With LLM watermarking already being deployed commercially, practical applications increasingly require multibit watermarks that encode more complex payl…

arXiv Security Read →
◬ AI & Machine Learning May 13, 2026
PhishSigma++: Malicious Email Detection with Typed Entity Relations

arXiv:2605.11619v1 Announce Type: new Abstract: With the rise of AI-generat…

arXiv Security Read →
◬ AI & Machine Learning May 13, 2026
Convolutional-Neural-Networks for Deanonymisation of I2P Traffic

arXiv:2605.11606v1 Announce Type: new Abstract: This study investigates the potential for deanonymizing services within the Invisible Internet Project (I2P) network through passive traffic analysis an…

arXiv Security Read →
◬ AI & Machine Learning May 13, 2026
FlowSteer: Prompt-Only Workflow Steering Exposes Planning-Time Vulnerabilities in Multi-Agent LLM Systems

arXiv:2605.11514v1 Announce Type: new Abstract: Multi-agent systems (MAS) powered by large language models (LLMs) increasingly adopt planner--executor architectures, where planners convert prompts int…

arXiv Security Read →
◬ AI & Machine Learning May 13, 2026
Digital Identity for Agentic Systems: Toward a Portable Authorization Standard for Autonomous Agents

arXiv:2605.11487v1 Announce Type: new Abstract: Enterprise AI is shifting from copilots to autonomous agents capable of executing workflows, negotiating outcomes, and making decisions with limited hum…

arXiv Security Read →
◬ AI & Machine Learning May 13, 2026
Can a Single Message Paralyze the AI Infrastructure? The Rise of AbO-DDoS Attacks through Targeted Mobius Injection

arXiv:2605.11442v1 Announce Type: new Abstract: Large Language Model (LLM) agents have emerged as key intermediaries, orchestrating complex interactions between human users and a wide range of digital…

arXiv Security Read →
◬ AI & Machine Learning May 13, 2026
Options, Not Clicks: Lattice Refinement for Consent-Driven MCP Authorization

arXiv:2605.11360v1 Announce Type: new Abstract: As Model Context Protocol adoption grows, securing tool invocations via meaningful user consent has become a critical challenge, as existing methods, br…

arXiv Security Read →
◬ AI & Machine Learning May 13, 2026
A Systematic Security Testing Approach for InterUSS-based environments

arXiv:2605.11339v1 Announce Type: new Abstract: Unmanned Traffic Management (UTM) federated ecosystems, such as InterUSS, enable secure coordination among UAS Service Suppliers (USSs). However, they b…

arXiv Security Read →
◬ AI & Machine Learning May 13, 2026
Context-Aware Spear Phishing: Generative AI-Enabled Attacks Against Individuals via Public Social Media Data

arXiv:2605.11268v1 Announce Type: new Abstract: We demonstrate how publicly available social-media data and generative AI (GenAI) can be misused to automate and scale highly personalized, context-awar…

arXiv Security Read →
◬ AI & Machine Learning May 13, 2026
Comment and Control: Hijacking Agentic Workflows via Context-Grounded Evolution

arXiv:2605.11229v1 Announce Type: new Abstract: Automation platforms such as GitHub Actions and n8n are increasingly adopting so-called agentic workflows, which integrate Large Language Model (LLM) ag…

arXiv Security Read →
Page 6 of 112