CyberIntel ⬡ News

// AI & Machine Learning
Intel Feed

cyberintel.kalymoon.com · 2688 articles · updated every 4 hours · grows forever

2688 total · 2647 full text · latest: May 17, 2026
◬ AI & Machine Learning Apr 21, 2026
CASCADE: A Cascaded Hybrid Defense Architecture for Prompt Injection Detection in MCP-Based Systems

arXiv:2604.17125v1 Announce Type: new Abstract: Model Context Protocol (MCP) is a rapidly adopted standard for defining and invoking external tools in LLM applications. The multi-layered architecture …

arXiv Security
◬ AI & Machine Learning Apr 21, 2026
HarmChip: Evaluating Hardware Security Centric LLM Safety via Jailbreak Benchmarking

arXiv:2604.17093v1 Announce Type: new Abstract: The integration of large language models (LLMs) into electronic design automation (EDA) workflows has introduced powerful capabilities for RTL generatio…

arXiv Security
◬ AI & Machine Learning Apr 21, 2026
False Security Confidence in Benign LLM Code Generation

arXiv:2604.17014v1 Announce Type: new Abstract: Prior work has demonstrated that functionally correct yet vulnerable outputs arise systematically in threat-oriented settings, where adversarial or impl…

arXiv Security
◬ AI & Machine Learning Apr 21, 2026
From Public-Key Linting to Operational Post-Quantum X.509 Assurance for ML-KEM and ML-DSA: Registry-Driven Policy, Mutation-Based Evaluation, and Import Validation

arXiv:2604.17003v1 Announce Type: new Abstract: Final FIPS and PKIX standards for ML-KEM and ML-DSA fix the normative floor, but operational assurance in post-quantum X.509 still depends on accountabl…

arXiv Security
◬ AI & Machine Learning Apr 21, 2026
Visual Inception: Compromising Long-term Planning in Agentic Recommenders via Multimodal Memory Poisoning

arXiv:2604.16966v1 Announce Type: new Abstract: The evolution from static ranking models to Agentic Recommender Systems (Agentic RecSys) empowers AI agents to maintain long-term user profiles and auto…

arXiv Security
◬ AI & Machine Learning Apr 21, 2026
Governed MCP: Kernel-Level Tool Governance for AI Agents via Logit-Based Safety Primitives

arXiv:2604.16870v1 Announce Type: new Abstract: AI agents increasingly call external tools (file system, network, APIs) through the Model Context Protocol (MCP). These tool calls are the agent's sysca…

arXiv Security
◬ AI & Machine Learning Apr 21, 2026
enclawed: A Configurable, Sector-Neutral Hardening Framework for Single-User AI Assistant Gateways

arXiv:2604.16838v1 Announce Type: new Abstract: We present enclawed, a hard-fork hardening framework built on top of the OpenClaw single-user personal artificial intelligence (AI) assistant gateway. e…

arXiv Security
◬ AI & Machine Learning Apr 21, 2026
Towards Deep Encrypted Training: Low-Latency, Memory-Efficient, and High-Throughput Inference for Privacy-Preserving Neural Networks

arXiv:2604.16834v1 Announce Type: new Abstract: Privacy-preserving machine learning (PPML) has become increasingly important in applications where sensitive data must remain confidential. Homomorphic …

arXiv Security
◬ AI & Machine Learning Apr 21, 2026
DALC-CT: Dynamic Analysis of Low-Level Code Traces for Constant-Time Verification

arXiv:2604.16832v1 Announce Type: new Abstract: Timing side-channel attacks exploit variations in program execution time to recover sensitive information. Cryptographic implementations are especially …

arXiv Security
◬ AI & Machine Learning Apr 21, 2026
ParikkhaChain: Blockchain-Based Result Processing and Privacy-Preserving Academic Record Management for the Complete Examination Lifecycle

arXiv:2604.16827v1 Announce Type: new Abstract: Academic examination systems worldwide continue to rely on centralised, opaque record-keeping that is often vulnerable to credential forgery, result tam…

arXiv Security
◬ AI & Machine Learning Apr 21, 2026
SafeDream: Safety World Model for Proactive Early Jailbreak Detection

arXiv:2604.16824v1 Announce Type: new Abstract: Multi-turn jailbreak attacks progressively erode LLM safety alignment across seemingly innocuous conversation turns, achieving success rates exceeding 9…

arXiv Security
◬ AI & Machine Learning Apr 21, 2026
CapSeal: Capability-Sealed Secret Mediation for Secure Agent Execution

arXiv:2604.16762v1 Announce Type: new Abstract: Modern AI agents routinely depend on secrets such as API keys and SSH credentials, yet the dominant deployment model still exposes those secrets directl…

arXiv Security
◬ AI & Machine Learning Apr 21, 2026
Privacy-Aware Machine Unlearning with SISA for Reinforcement Learning-Based Ransomware Detection

arXiv:2604.16760v1 Announce Type: new Abstract: Ransomware detection systems increasingly rely on behavior-based machine learning to address evolving attack strategies. However, emerging privacy compl…

arXiv Security
◬ AI & Machine Learning Apr 21, 2026
Glitch in the Sky: Exploiting Voltage Fault Injection in UAV Flight Controllers

arXiv:2604.16699v1 Announce Type: new Abstract: As Cyber-Physical Systems (CPS) become increasingly pervasive and autonomous, ensuring the resilience of their embedded logic is critical to maintaining…

arXiv Security
◬ AI & Machine Learning Apr 21, 2026
Surgical Repair of Insecure Code Generation in LLMs

arXiv:2604.16697v1 Announce Type: new Abstract: Large language models write production code, and yet they routinely introduce well-known vulnerabilities. We show that this is not a knowledge deficit: …

arXiv Security
◬ AI & Machine Learning Apr 21, 2026
Stringology Based Cryptology

arXiv:2604.16669v1 Announce Type: new Abstract: Modern cryptographic primitives are known to generate large volumes of sequential data such as keystreams, ciphertext blocks, and hash outputs. Traditi…

arXiv Security
◬ AI & Machine Learning Apr 21, 2026
Benign Fine-Tuning Breaks Safety Alignment in Audio LLMs

arXiv:2604.16659v1 Announce Type: new Abstract: Prior work shows that fine-tuning aligned models on benign data degrades safety in text and vision modalities, and that proximity to harmful content in …

arXiv Security
◬ AI & Machine Learning Apr 21, 2026
SafeLM: Unified Privacy-Aware Optimization for Trustworthy Federated Large Language Models

arXiv:2604.16606v1 Announce Type: new Abstract: Large language models (LLMs) are increasingly deployed in high-stakes domains, yet a unified treatment of their overlapping safety challenges remains la…

arXiv Security
◬ AI & Machine Learning Apr 21, 2026
Polynomial Multiproofs for Scalable Data Availability Sampling in Blockchain Light Clients

arXiv:2604.16559v1 Announce Type: new Abstract: Light clients are essential for scalable blockchain systems because they verify data availability without downloading full blocks. In data availability …

arXiv Security
◬ AI & Machine Learning Apr 21, 2026
A Survey on the Security of Long-Term Memory in LLM Agents: Toward Mnemonic Sovereignty

arXiv:2604.16548v1 Announce Type: new Abstract: Research on large language model (LLM) security is shifting from "will the model leak training data" to a more consequential question: can an agent with…

arXiv Security
◬ AI & Machine Learning Apr 21, 2026
TWGuard: A Case Study of LLM Safety Guardrails for Localized Linguistic Contexts

arXiv:2604.16542v1 Announce Type: new Abstract: Safety guardrails have become an active area of research in AI safety, aimed at ensuring the appropriate behavior of large language models (LLMs). Howev…

arXiv Security
◬ AI & Machine Learning Apr 21, 2026
Public and private blockchain for decentralized digital building twins and building automation system

arXiv:2604.16534v1 Announce Type: new Abstract: The communication protocols and data transfer mechanisms employed by IoT devices in smart buildings and corresponding digital twin systems predominantly…

arXiv Security
◬ AI & Machine Learning Apr 21, 2026
Anumati: Proof of Adherence as a Formal Consent Model for Autonomous Agent Protocols

arXiv:2604.16524v1 Announce Type: new Abstract: As autonomous AI agents increasingly call other agents to complete tasks on behalf of a human principal, a structural accountability gap has emerged: th…

arXiv Security
◬ AI & Machine Learning Apr 21, 2026
CAMP: Cumulative Agentic Masking and Pruning for Privacy Protection in Multi-Turn LLM Conversations

arXiv:2604.16521v1 Announce Type: new Abstract: The deployment of Large Language Models in agentic, multi-turn conversational settings has introduced a class of privacy vulnerabilities that existing p…

arXiv Security