CyberIntel ⬡ News

// AI & Machine Learning
Intel Feed

cyberintel.kalymoon.com  ·  2686 articles  ·  updated every 4 hours · grows forever

2686 Total · 2643 Full Text · Latest: May 16, 2026
🔥 Trending Topics · Last 48h
◬ AI & Machine Learning May 14, 2026
Learning Transferable Latent User Preferences for Human-Aligned Decision Making

arXiv:2605.12682v1 Announce Type: new Abstract: Large language models (LLMs) are increasingly used as reasoning modules in many applications. While they are efficient in certain tasks, LLMs often stru…

arXiv AI Read →
◬ AI & Machine Learning May 14, 2026
Revealing Interpretable Failure Modes of VLMs

arXiv:2605.12674v1 Announce Type: new Abstract: Vision-Language Models (VLMs) are increasingly used in safety-critical applications because of their broad reasoning capabilities and ability to general…

arXiv AI Read →
◬ AI & Machine Learning May 14, 2026
Do Androids Dream of Breaking the Game? Systematically Auditing AI Agent Benchmarks with BenchJack

arXiv:2605.12673v1 Announce Type: new Abstract: Agent benchmarks have become the de facto measure of frontier AI competence, guiding model selection, investment, and deployment. However, reward hackin…

arXiv AI Read →
◬ AI & Machine Learning May 14, 2026
Macro-Action Based Multi-Agent Instruction Following through Value Cancellation

arXiv:2605.12655v1 Announce Type: new Abstract: Multi-agent reinforcement learning (MARL) in real-world use cases may need to adapt to external natural language instructions that interrupt ongoing beh…

arXiv AI Read →
◬ AI & Machine Learning May 14, 2026
Think Twice, Act Once: Verifier-Guided Action Selection For Embodied Agents

arXiv:2605.12620v1 Announce Type: new Abstract: Building generalist embodied agents capable of solving complex real-world tasks remains a fundamental challenge in AI. Multimodal Large Language Models …

arXiv AI Read →
◬ AI & Machine Learning May 14, 2026
EBCC: Enclave-Backed Confidential Containers via OCI-Compatible Runtime Integration

arXiv:2605.13676v1 Announce Type: new Abstract: Container runtimes provide a stable operational interface for deploying, monitoring, and controlling modern workloads, while trusted execution environme…

arXiv Security Read →
◬ AI & Machine Learning May 14, 2026
Limits of Personalizing Differential Privacy Budgets

arXiv:2605.13503v1 Announce Type: new Abstract: A key technical difficulty in differential privacy is selecting a privacy budget that satisfies privacy requirements while maximizing utility. A natural…

arXiv Security Read →
◬ AI & Machine Learning May 14, 2026
Phantom Force: Injecting Adversarial Tactile Perceptions into Embodied Intelligence via EMI

arXiv:2605.13492v1 Announce Type: new Abstract: Embodied intelligent robots rely on tactile sensors to interact with the physical world safely. While the security of visual perception systems has been…

arXiv Security Read →
◬ AI & Machine Learning May 14, 2026
Sleeper Channels and Provenance Gates: Persistent Prompt Injection in Always-on Autonomous AI Agents

arXiv:2605.13471v1 Announce Type: new Abstract: Always-on AI agents (OpenClaw, Hermes Agent) run as a single persistent process under the owner's identity, folding messaging, memory, self-authored ski…

arXiv Security Read →
◬ AI & Machine Learning May 14, 2026
Model-Agnostic Lifelong LLM Safety via Externalized Attack-Defense Co-Evolution

arXiv:2605.13411v1 Announce Type: new Abstract: Large language models remain vulnerable to adversarial prompts that elicit harmful outputs. Existing safety paradigms typically couple red-teaming and p…

arXiv Security Read →
◬ AI & Machine Learning May 14, 2026
Inducing Overthink: Hierarchical Genetic Algorithm-based DoS Attack on Black-Box Large Language Reasoning Models

arXiv:2605.13338v1 Announce Type: new Abstract: Large Reasoning Models (LRMs) are increasingly integrated into systems requiring reliable multi-step inference, yet this growing dependence exposes new …

arXiv Security Read →
◬ AI & Machine Learning May 14, 2026
Context-Aware Web Attack Detection in Open-Source SIEM Systems via MITRE ATT&CK-Enriched Behavioral Profiling

arXiv:2605.13337v1 Announce Type: new Abstract: Security Information and Event Management (SIEM) systems aggregate log data from heterogeneous sources to detect coordinated attacks. Traditional rule-b…

arXiv Security Read →
◬ AI & Machine Learning May 14, 2026
Automatic Detection of Reference Counting Bugs in Linux Kernel Drivers

arXiv:2605.13246v1 Announce Type: new Abstract: Reference counting bugs in Linux kernel drivers can lead to severe resource mismanagement and security vulnerabilities. We introduce DrvHorn, a novel au…

arXiv Security Read →
◬ AI & Machine Learning May 14, 2026
Backdoor Channels Hidden in Latent Space: Cryptographic Undetectability in Modern Neural Networks

arXiv:2605.13214v1 Announce Type: new Abstract: Recent cryptographic results establish that neural networks can be backdoored such that no efficient algorithm can distinguish them from a clean model. …

arXiv Security Read →
◬ AI & Machine Learning May 14, 2026
LoREnc: Low-Rank Encryption for Securing Foundation Models and LoRA Adapters

arXiv:2605.13163v1 Announce Type: new Abstract: Foundation models and low-rank adapters enable efficient on-device generative AI but raise risks such as intellectual property leakage and model recover…

arXiv Security Read →
◬ AI & Machine Learning May 14, 2026
Empowering IoT Security: On-Device Intrusion Detection in Resource Constrained Devices

arXiv:2605.13159v1 Announce Type: new Abstract: IoT devices, particularly microcontrollers, are challenged by their inherent limitations in processing capabilities, memory capacity, and energy conservat…

arXiv Security Read →
◬ AI & Machine Learning May 14, 2026
Extending Blockchain Untraceability with Plausible Deniability

arXiv:2605.13132v1 Announce Type: new Abstract: Traditional blockchain untraceability schemes, such as mixers and privacy coins, obscure the sender-receiver relationship by placing transfers within an…

arXiv Security Read →
◬ AI & Machine Learning May 14, 2026
DiffusionHijack: Supply-Chain PRNG Backdoor Attack on Diffusion Models and Quantum Random Number Defense

arXiv:2605.13115v1 Announce Type: new Abstract: Diffusion models depend on pseudo-random number generators (PRNGs) for latent noise sampling. We present DiffusionHijack, a supply-chain backdoor attack…

arXiv Security Read →
◬ AI & Machine Learning May 14, 2026
Security Incentivization: An Empirical Study of how Micropayments Impact Code Security

arXiv:2605.13100v1 Announce Type: new Abstract: Security often receives insufficient developer attention because it does not directly generate visible value, leading to underinvestment in practice. We…

arXiv Security Read →
◬ AI & Machine Learning May 14, 2026
Watermarking Should Be Treated as a Monitoring Primitive

arXiv:2605.13095v1 Announce Type: new Abstract: Watermarking is widely proposed for provenance, attribution, and safety monitoring in generative models, yet is typically evaluated only under adversari…

arXiv Security Read →
◬ AI & Machine Learning May 14, 2026
No Attack Required: Semantic Fuzzing for Specification Violations in Agent Skills

arXiv:2605.13044v1 Announce Type: new Abstract: LLM-powered agents can silently delete documents, leak credentials, or transfer funds on a routine user request, not because the agent was attacked, but…

arXiv Security Read →
◬ AI & Machine Learning May 14, 2026
Insecure Despite Proven Updated: Extracting the Root VCEK Seed on EPYC Milan via a Software-Only Attack

arXiv:2605.12990v1 Announce Type: new Abstract: In the official whitepaper of Secure Encrypted Virtualization with Secure Nested Paging (SEV-SNP), AMD explicitly emphasizes the capability to prevent T…

arXiv Security Read →
◬ AI & Machine Learning May 14, 2026
CLOUDBURST: Cloud-Layer Observations Using Beacons for Unified Real-time Surveillance and Threat Attribution

arXiv:2605.12976v1 Announce Type: new Abstract: Modern cloud-native environments present a fundamentally different exfiltration threat surface than traditional file-based scenarios. Attackers targetin…

arXiv Security Read →
◬ AI & Machine Learning May 14, 2026
From Compression to Accountability: Harmless Copyright Protection for Dataset Distillation

arXiv:2605.12942v1 Announce Type: new Abstract: Large-scale datasets have been a key driving force behind the rapid progress of deep learning, but their storage, computational, and energy costs have b…

arXiv Security Read →
Page 5 of 112