CyberIntel ⬡ News

// CyberIntel Feed

cyberintel.kalymoon.com · 20,470 articles · updated every 4 hours · grows forever

20,470 total · 17,918 full text · latest: May 16, 2026
◈ Women in Cyber ◉ Threat Intelligence ◎ How-To & Tutorials ⬡ Vulnerabilities & CVEs 🔍 Digital Forensics ◍ Incident Response & DFIR ◆ Security Tools & Reviews ◇ Industry News & Leadership ✉ Email Security 🛡 Active Threats ⚠ Critical CVEs ◐ Insider Threat & DLP ◌ Quantum Computing ◬ AI & Machine Learning
🔥 Trending Topics · Last 48h
◬ AI & Machine Learning · May 15, 2026
MetaBackdoor: Exploiting Positional Encoding as a Backdoor Attack Surface in LLMs

arXiv:2605.15172v1 Announce Type: new Abstract: Backdoor attacks pose a serious security threat to large language models (LLMs), which are increasingly deployed as general-purpose assistants in safety…

Source: arXiv Security
◬ AI & Machine Learning · May 15, 2026
Talk is (Not) Cheap: A Taxonomy and Benchmark Coverage Audit for LLM Attacks

arXiv:2605.15118v1 Announce Type: new Abstract: We introduce a reusable framework for auditing whether LLM attack benchmarks collectively cover the threat surface: a 4$\times$6 Target $\times$ Techniq…

Source: arXiv Security
◬ AI & Machine Learning · May 15, 2026
PickleFuzzer: A Case Study in Fuzzing for Discrepancies Between Python Pickle Implementations

arXiv:2605.15084v1 Announce Type: new Abstract: Python's native serialization protocol, pickle, is a powerful but insecure format for transferring untrusted data. It is frequently used, especially for…

Source: arXiv Security
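The pickle insecurity the PickleFuzzer abstract alludes to is easy to demonstrate: the `__reduce__` protocol lets any pickled object name an arbitrary callable that the loader will invoke. A minimal sketch (the `Payload` class is hypothetical, and `len` stands in for a genuinely dangerous callable like `os.system`):

```python
import pickle

class Payload:
    # pickle's __reduce__ protocol lets an object name an arbitrary
    # callable to invoke at load time -- the root of pickle's insecurity
    def __reduce__(self):
        # benign stand-in; a real payload would reference os.system etc.
        return (len, ("pwned!",))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)   # invokes len("pwned!") instead of
                              # reconstructing a Payload instance
print(result)                 # -> 6
```

Note that `loads` never returns a `Payload` at all; the attacker-chosen call replaces deserialization entirely, which is why the Python docs warn against unpickling untrusted data.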
◬ AI & Machine Learning · May 15, 2026
Analyzing Codes of Conduct for Online Safety in Video Games at Scale

arXiv:2605.15047v1 Announce Type: new Abstract: Online video games have become major online social spaces where users interact, compete, and create together. These spaces, however, expose users to a w…

Source: arXiv Security
◬ AI & Machine Learning · May 15, 2026
WARD: Adversarially Robust Defense of Web Agents Against Prompt Injections

arXiv:2605.15030v1 Announce Type: new Abstract: Web agents can autonomously complete online tasks by interacting with websites, but their exposure to open web environments makes them vulnerable to pro…

Source: arXiv Security
◬ AI & Machine Learning · May 15, 2026
Toward Securing AI Agents Like Operating Systems

arXiv:2605.14932v1 Announce Type: new Abstract: Autonomous agents based on large language models (LLMs) are rapidly emerging as a general-purpose technology, with recent systems such as OpenClaw exten…

Source: arXiv Security
◬ AI & Machine Learning · May 15, 2026
Do Coding Agents Understand Least-Privilege Authorization?

arXiv:2605.14859v1 Announce Type: new Abstract: As coding agents gain access to shells, repositories, and user files, least-privilege authorization becomes a prerequisite for safe deployment: an agent…

Source: arXiv Security
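The least-privilege idea in the abstract above can be sketched as a simple command gate: before a coding agent runs a shell command, check it against an explicit allowlist and reject shell metacharacters. This is a hypothetical minimal policy for illustration (the `ALLOWED_PROGRAMS` set and `authorize` helper are assumptions, not the paper's method):

```python
import shlex

# Assumed policy: the agent may only invoke explicitly allowlisted
# programs, and never commands containing shell metacharacters.
ALLOWED_PROGRAMS = {"git", "ls", "pytest"}

def authorize(command: str) -> bool:
    """Return True only if the command is a plain allowlisted invocation."""
    if any(ch in command for ch in ";|&$`"):  # crude chaining/substitution block
        return False
    try:
        tokens = shlex.split(command)          # POSIX-style tokenization
    except ValueError:                         # unbalanced quotes, etc.
        return False
    return bool(tokens) and tokens[0] in ALLOWED_PROGRAMS

print(authorize("git status"))            # -> True
print(authorize("rm -rf /"))              # -> False (rm not allowlisted)
print(authorize("git status; rm -rf /"))  # -> False (chaining blocked)
```

Real agent sandboxes layer this with argument validation, filesystem scoping, and OS-level isolation; an allowlist alone is only the first rung.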
◬ AI & Machine Learning · May 15, 2026
Known By Their Actions: Fingerprinting LLM Browser Agents via UI Traces

arXiv:2605.14786v1 Announce Type: new Abstract: As LLM-based agents increasingly browse the web on users' behalf, a natural question arises: can websites passively identify which underlying model powe…

Source: arXiv Security
◬ AI & Machine Learning · May 15, 2026
EVA: Editing for Versatile Alignment against Jailbreaks

arXiv:2605.14750v1 Announce Type: new Abstract: Large Language Models (LLMs) and Vision Language Models (VLMs) have demonstrated impressive capabilities but remain vulnerable to jailbreaking attacks, …

Source: arXiv Security
◬ AI & Machine Learning · May 15, 2026
Adapting AlphaEvolve to Optimize Fully Homomorphic Encryption on TPUs

arXiv:2605.14718v1 Announce Type: new Abstract: The deployment of Fully Homomorphic Encryption (FHE) at scale is hindered due to its heavy computational overhead. While specialized hardware accelerato…

Source: arXiv Security
◬ AI & Machine Learning · May 15, 2026
Capacitive Touchscreens at Risk: A Practical Side-Channel Attack on Smartphones via Electromagnetic Emanations

arXiv:2605.14633v1 Announce Type: new Abstract: Capacitive touchscreens in modern smartphones introduce severe side-channel vulnerabilities. However, existing attacks often require restrictive conditi…

Source: arXiv Security
◬ AI & Machine Learning · May 15, 2026
One Step to the Side: Why Defenses Against Malicious Finetuning Fail Under Adaptive Adversaries

arXiv:2605.14605v1 Announce Type: new Abstract: Model providers increasingly release open weights or allow users to fine-tune foundation models through APIs. Although these models are safety-aligned b…

Source: arXiv Security
◬ AI & Machine Learning · May 15, 2026
Privacy Auditing with Zero (0) Training Run

arXiv:2605.14591v1 Announce Type: new Abstract: Privacy auditing provides empirical lower bounds on the differential privacy parameters of learning algorithms. Existing methods, however, require inter…

Source: arXiv Security
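The "empirical lower bounds" in the privacy-auditing abstract rest on a textbook relation: for a pure ε-differentially-private mechanism, any membership-inference attack satisfies TPR ≤ e^ε · FPR, so observed attack rates imply ε ≥ ln(TPR / FPR). A minimal sketch of that standard bound (the paper's zero-training-run method itself is not reproduced here, and real audits add confidence intervals over finite samples):

```python
import math

def epsilon_lower_bound(tpr: float, fpr: float) -> float:
    """Empirical lower bound on epsilon for a pure eps-DP mechanism,
    from a membership-inference attack's true/false positive rates:
    TPR <= exp(eps) * FPR  implies  eps >= ln(TPR / FPR)."""
    if tpr <= 0 or fpr <= 0:
        raise ValueError("rates must be positive")
    return max(math.log(tpr / fpr), 0.0)

# An attack flagging true members 50% of the time with 1% false positives
# certifies that the mechanism cannot satisfy eps-DP for eps < ln(50)
print(round(epsilon_lower_bound(0.50, 0.01), 3))  # -> 3.912
```

A trivial attack (TPR = FPR) yields a bound of 0, as expected: it proves nothing about the mechanism's privacy.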
◬ AI & Machine Learning · May 15, 2026
Defenses at Odds: Measuring and Explaining Defense Conflicts in Large Language Models

arXiv:2605.14514v1 Announce Type: new Abstract: Large Language Models (LLMs) deployed in high-stakes applications must simultaneously manage multiple risks, yet existing defenses are almost exclusivel…

Source: arXiv Security
◬ AI & Machine Learning · May 15, 2026
Exploiting LLM Agent Supply Chains via Payload-less Skills

arXiv:2605.14460v1 Announce Type: new Abstract: Autonomous agents powered by Large Language Models (LLMs) acquire external functionalities through third-party skills available in open marketplaces. Ad…

Source: arXiv Security
◬ AI & Machine Learning · May 15, 2026
MemLineage: Lineage-Guided Enforcement for LLM Agent Memory

arXiv:2605.14421v1 Announce Type: new Abstract: We introduce MemLineage, a defense for LLM agent memory that attaches both cryptographic provenance and LLM-mediated derivation lineage to every entry. …

Source: arXiv Security
◬ AI & Machine Learning · May 15, 2026
The Great Pretender: A Stochasticity Problem in LLM Jailbreak

arXiv:2605.14418v1 Announce Type: new Abstract: "Oh-Oh, yes, I'm the great pretender. Pretending that I'm doing well. My need is such, I pretend too much..." summarizes the state in the area of jailbr…

Source: arXiv Security
◬ AI & Machine Learning · May 15, 2026
Model Forensics in AI-Native Wireless Networks: Taxonomy, Applications, and Case Study

arXiv:2605.14387v1 Announce Type: new Abstract: As artificial intelligence (AI) is increasingly embedded in wireless networks, models are becoming core components that influence signal processing, res…

Source: arXiv Security
◬ AI & Machine Learning · May 15, 2026
To See is Not to Learn: Protecting Multimodal Data from Unauthorized Fine-Tuning of Large Vision-Language Model

arXiv:2605.14291v1 Announce Type: new Abstract: The rapid advancement of Large Vision-Language Models (LVLMs) is increasingly accompanied by unauthorized scraping and training on multimodal web data, …

Source: arXiv Security
◬ AI & Machine Learning · May 15, 2026
Web Agents Should Adopt the Plan-Then-Execute Paradigm

arXiv:2605.14290v1 Announce Type: new Abstract: ReAct has become the default architecture across LLM agents, and many existing web agents follow this paradigm. We argue that it is the wrong default fo…

Source: arXiv Security
◬ AI & Machine Learning · May 15, 2026
On the (non-)resilience of encrypted controllers to covert attacks

arXiv:2605.14230v1 Announce Type: new Abstract: The security of networked control systems (NCS) is receiving increasing attention from both cyber-security and system-theoretic perspectives. The former…

Source: arXiv Security
◬ AI & Machine Learning · May 15, 2026
Characterizing AI-Assisted Bot Traffic in Darknet Data: Implications for ICS and IIoT Security

arXiv:2605.14209v1 Announce Type: new Abstract: The rise of automated scanning tools and AI assisted reconnaissance agents has significantly altered internet background traffic patterns, threatening t…

Source: arXiv Security
◬ AI & Machine Learning · May 15, 2026
DSTAN-Med: Dual-Channel Spatiotemporal Attention with Physiological Plausibility Filtering for False Data Injection Attack Detection in IoT-Based Medical Devices

arXiv:2605.14165v1 Announce Type: new Abstract: False data injection (FDI) attacks on Internet of Medical Things (IoMT) sensor streams falsify vital signs in transit, threatening patient safety and de…

Source: arXiv Security
◬ AI & Machine Learning · May 15, 2026
ExploitBench: A Capability Ladder Benchmark for LLM Cybersecurity Agents

arXiv:2605.14153v1 Announce Type: new Abstract: Exploitation is not a binary event. It is a ladder of acquiring progressive capabilities, from executing a single buggy line of code to taking full cont…

Source: arXiv Security
Page 12 of 853