CyberIntel ⬡ News


cyberintel.kalymoon.com · 20591 articles · updated every 4 hours · grows forever

20591 Total · 17966 Full Text · Latest: May 17, 2026
◈ Women in Cyber · ◉ Threat Intelligence · ◎ How-To & Tutorials · ⬡ Vulnerabilities & CVEs · 🔍 Digital Forensics · ◍ Incident Response & DFIR · ◆ Security Tools & Reviews · ◇ Industry News & Leadership · ✉ Email Security · 🛡 Active Threats · ⚠ Critical CVEs · ◐ Insider Threat & DLP · ◌ Quantum Computing · ◬ AI & Machine Learning
🔥 Trending Topics · Last 48h
◬ AI & Machine Learning · May 14, 2026
Model-Agnostic Lifelong LLM Safety via Externalized Attack-Defense Co-Evolution

arXiv:2605.13411v1 Announce Type: new Abstract: Large language models remain vulnerable to adversarial prompts that elicit harmful outputs. Existing safety paradigms typically couple red-teaming and p…

arXiv Security · Read →
◬ AI & Machine Learning · May 14, 2026
Inducing Overthink: Hierarchical Genetic Algorithm-based DoS Attack on Black-Box Large Language Reasoning Models

arXiv:2605.13338v1 Announce Type: new Abstract: Large Reasoning Models (LRMs) are increasingly integrated into systems requiring reliable multi-step inference, yet this growing dependence exposes new …

arXiv Security · Read →
◬ AI & Machine Learning · May 14, 2026
Context-Aware Web Attack Detection in Open-Source SIEM Systems via MITRE ATT&CK-Enriched Behavioral Profiling

arXiv:2605.13337v1 Announce Type: new Abstract: Security Information and Event Management (SIEM) systems aggregate log data from heterogeneous sources to detect coordinated attacks. Traditional rule-b…

arXiv Security · Read →
◬ AI & Machine Learning · May 14, 2026
Automatic Detection of Reference Counting Bugs in Linux Kernel Drivers

arXiv:2605.13246v1 Announce Type: new Abstract: Reference counting bugs in Linux kernel drivers can lead to severe resource mismanagement and security vulnerabilities. We introduce DrvHorn, a novel au…

arXiv Security · Read →
◬ AI & Machine Learning · May 14, 2026
Backdoor Channels Hidden in Latent Space: Cryptographic Undetectability in Modern Neural Networks

arXiv:2605.13214v1 Announce Type: new Abstract: Recent cryptographic results establish that neural networks can be backdoored such that no efficient algorithm can distinguish them from a clean model. …

arXiv Security · Read →
◬ AI & Machine Learning · May 14, 2026
LoREnc: Low-Rank Encryption for Securing Foundation Models and LoRA Adapters

arXiv:2605.13163v1 Announce Type: new Abstract: Foundation models and low-rank adapters enable efficient on-device generative AI but raise risks such as intellectual property leakage and model recover…

arXiv Security · Read →
◬ AI & Machine Learning · May 14, 2026
Empowering IoT Security: On-Device Intrusion Detection in Resource Constrained Devices

arXiv:2605.13159v1 Announce Type: new Abstract: IoT devices particularly microcontrollers are challenged by their inherent limitations in processing capabilities, memory capacity, and energy conservat…

arXiv Security · Read →
◬ AI & Machine Learning · May 14, 2026
Extending Blockchain Untraceability with Plausible Deniability

arXiv:2605.13132v1 Announce Type: new Abstract: Traditional blockchain untraceability schemes, such as mixers and privacy coins, obscure the sender-receiver relationship by placing transfers within an…

arXiv Security · Read →
◬ AI & Machine Learning · May 14, 2026
DiffusionHijack: Supply-Chain PRNG Backdoor Attack on Diffusion Models and Quantum Random Number Defense

arXiv:2605.13115v1 Announce Type: new Abstract: Diffusion models depend on pseudo-random number generators (PRNGs) for latent noise sampling. We present DiffusionHijack, a supply-chain backdoor attack…

arXiv Security · Read →
◬ AI & Machine Learning · May 14, 2026
Security Incentivization: An Empirical Study of how Micropayments Impact Code Security

arXiv:2605.13100v1 Announce Type: new Abstract: Security often receives insufficient developer attention because it does not directly generate visible value, leading to underinvestment in practice. We…

arXiv Security · Read →
◬ AI & Machine Learning · May 14, 2026
Watermarking Should Be Treated as a Monitoring Primitive

arXiv:2605.13095v1 Announce Type: new Abstract: Watermarking is widely proposed for provenance, attribution, and safety monitoring in generative models, yet is typically evaluated only under adversari…

arXiv Security · Read →
◬ AI & Machine Learning · May 14, 2026
No Attack Required: Semantic Fuzzing for Specification Violations in Agent Skills

arXiv:2605.13044v1 Announce Type: new Abstract: LLM-powered agents can silently delete documents, leak credentials, or transfer funds on a routine user request, not because the agent was attacked, but…

arXiv Security · Read →
◬ AI & Machine Learning · May 14, 2026
Insecure Despite Proven Updated: Extracting the Root VCEK Seed on EPYC Milan via a Software-Only Attack

arXiv:2605.12990v1 Announce Type: new Abstract: In the official whitepaper of Secure Encrypted Virtualization with Secure Nested Paging (SEV-SNP), AMD explicitly emphasizes the capability to prevent T…

arXiv Security · Read →
◬ AI & Machine Learning · May 14, 2026
CLOUDBURST: Cloud-Layer Observations Using Beacons for Unified Real-time Surveillance and Threat Attribution

arXiv:2605.12976v1 Announce Type: new Abstract: Modern cloud-native environments present a fundamentally different exfiltration threat surface than traditional file-based scenarios. Attackers targetin…

arXiv Security · Read →
◬ AI & Machine Learning · May 14, 2026
From Compression to Accountability: Harmless Copyright Protection for Dataset Distillation

arXiv:2605.12942v1 Announce Type: new Abstract: Large-scale datasets have been a key driving force behind the rapid progress of deep learning, but their storage, computational, and energy costs have b…

arXiv Security · Read →
◬ AI & Machine Learning · May 14, 2026
ThermalTap: Passive Application Fingerprinting in VR Headsets via Thermal Side Channels

arXiv:2605.12927v1 Announce Type: new Abstract: Standalone virtual reality (VR) headsets process highly sensitive personal, professional, and health-related data, yet their susceptibility to non-conta…

arXiv Security · Read →
◬ AI & Machine Learning · May 14, 2026
Do Skill Descriptions Tell the Truth? Detecting Undisclosed Security Behaviors in Code-Backed LLM Skills

arXiv:2605.12875v1 Announce Type: new Abstract: Programmatic skills in LLM ecosystems consist of a natural-language description and executable implementation files. Users and LLMs rely on the descript…

arXiv Security · Read →
◬ AI & Machine Learning · May 14, 2026
Quantifying LLM Safety Degradation Under Repeated Attacks Using Survival Analysis

arXiv:2605.12869v1 Announce Type: new Abstract: Large language models (LLMs) are increasingly deployed in a wide range of applications, yet remain vulnerable to adversarial jailbreak attacks that circ…

arXiv Security · Read →
◬ AI & Machine Learning · May 14, 2026
HE-PIM: Demystifying Homomorphic Operations on a Real-world Processing-in-Memory System

arXiv:2605.12841v1 Announce Type: new Abstract: Homomorphic encryption (HE) enables computation over encrypted data, offering strong privacy guarantees for untrusted computing environments. Practical …

arXiv Security · Read →
◬ AI & Machine Learning · May 14, 2026
GraphIP-Bench: How Hard Is It to Steal a Graph Neural Network, and Can We Stop It?

arXiv:2605.12827v1 Announce Type: new Abstract: Graph neural networks (GNNs) deployed as cloud services can be \emph{stolen} through \emph{model-extraction attacks}, which train a surrogate from query…

arXiv Security · Read →
◬ AI & Machine Learning · May 14, 2026
CoT-Guard: Small Models for Strong Monitoring

arXiv:2605.12746v1 Announce Type: new Abstract: Monitoring the chain-of-thought (CoT) of reasoning models is a promising approach for detecting covert misbehavior (i.e., hidden objectives) in code gen…

arXiv Security · Read →
◬ AI & Machine Learning · May 14, 2026
Still Camouflage, Moving Illusion: View-Induced Trajectory Manipulation in Autonomous Driving

arXiv:2605.12743v1 Announce Type: new Abstract: Existing physical adversarial attacks on vision-based autonomous driving induce time-evolving perception errors, including biased object tracking or tra…

arXiv Security · Read →
◬ AI & Machine Learning · May 14, 2026
Persona-Conditioned Adversarial Prompting (PCAP): Multi-Identity Red-Teaming for Enhanced Adversarial Prompt Discovery

arXiv:2605.12565v1 Announce Type: new Abstract: Existing automated red-teaming pipelines often miss attacks that depend on attacker identity, framing, or multi-turn tactics. This under-coverage undere…

arXiv Security · Read →
◬ AI & Machine Learning · May 14, 2026
OverrideFuzz: Semantic-Aware Grammar Fuzzing for Script-Runtime Vulnerabilities

arXiv:2605.12563v1 Announce Type: new Abstract: Script-language runtimes such as Python, Lua, and JavaScript are widely deployed in security sensitive contexts, yet they remain difficult to test becau…

arXiv Security · Read →
Page 32 of 858