arXiv:2605.14004v1 Announce Type: new Abstract: Generative models are often trained with a next-token prediction objective, yet many downstream applications require the ability to estimate or control …
cyberintel.kalymoon.com · 2684 articles · updated every 4 hours · grows forever
Tenable Releases 2026 Cloud and AI Security Risk Report Highlighting Urgent Cybersecurity Threats · Quiver Quantitative
arXiv:2605.14002v1 Announce Type: new Abstract: Large Reasoning Models (LRMs) embedded in agentic frameworks have transformed information retrieval from static, long context question answering into op…
arXiv:2605.13880v1 Announce Type: new Abstract: Agent memory is typically constructed either offline from curated demonstrations or online from post-deployment interactions. However, regardless of how…
arXiv:2605.13851v1 Announce Type: new Abstract: Multi-agent orchestration -- in which a hidden coordinator manages specialized worker agents -- is becoming the default architecture for enterprise AI d…
arXiv:2605.13850v1 Announce Type: new Abstract: Existing frameworks for LLM-based agent architectures describe systems from a single perspective: industry guides (Anthropic, Google, LangChain) focus o…
arXiv:2605.13849v1 Announce Type: new Abstract: Determining what to eat to satisfy nutritional requirements is one of the oldest optimization problems in operations research, yet existing formulations…
arXiv:2605.13848v1 Announce Type: new Abstract: Agentic LLM frameworks that rely on prompted orchestration, where the model itself determines workflow transitions, often suffer from hallucinated routi…
arXiv:2605.14204v1 Announce Type: cross Abstract: Connected and autonomous vehicles and smart mobility services increasingly use digital route guidance as an operational input to traffic network manag…
arXiv:2605.14152v1 Announce Type: cross Abstract: Safety evaluations for large language models (LLMs) increasingly target high-stakes National Security and Public Safety (NSPS) risks, yet multilingual…
arXiv:2605.14032v1 Announce Type: cross Abstract: 5G networks provide low-latency, high throughput, and massive connectivity, yet the control plane remains exposed to several security threats. Among t…
arXiv:2605.15172v1 Announce Type: new Abstract: Backdoor attacks pose a serious security threat to large language models (LLMs), which are increasingly deployed as general-purpose assistants in safety…
arXiv:2605.15118v1 Announce Type: new Abstract: We introduce a reusable framework for auditing whether LLM attack benchmarks collectively cover the threat surface: a 4$\times$6 Target $\times$ Techniq…
arXiv:2605.15084v1 Announce Type: new Abstract: Python's native serialization protocol, pickle, is a powerful but insecure format for transferring untrusted data. It is frequently used, especially for…
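The insecurity noted in this abstract is concrete: pickle's deserialization protocol will invoke whatever callable a crafted payload specifies via `__reduce__`. A minimal sketch (using a benign `eval` call as a stand-in for an attacker's payload, e.g. `os.system`):

```python
import pickle

class Payload:
    """A crafted object whose deserialization runs arbitrary code."""

    def __reduce__(self):
        # During pickle.loads(), pickle calls eval("6*7").
        # An attacker would return (os.system, ("malicious command",))
        # or similar instead of this benign stand-in.
        return (eval, ("6*7",))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)  # deserialization alone executes the call
print(result)                # -> 42
```

This is why unpickling untrusted data (common in ML model sharing, as the abstract suggests) is equivalent to executing untrusted code.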
arXiv:2605.15047v1 Announce Type: new Abstract: Online video games have become major online social spaces where users interact, compete, and create together. These spaces, however, expose users to a w…
arXiv:2605.15030v1 Announce Type: new Abstract: Web agents can autonomously complete online tasks by interacting with websites, but their exposure to open web environments makes them vulnerable to pro…
arXiv:2605.14932v1 Announce Type: new Abstract: Autonomous agents based on large language models (LLMs) are rapidly emerging as a general-purpose technology, with recent systems such as OpenClaw exten…
arXiv:2605.14859v1 Announce Type: new Abstract: As coding agents gain access to shells, repositories, and user files, least-privilege authorization becomes a prerequisite for safe deployment: an agent…
arXiv:2605.14786v1 Announce Type: new Abstract: As LLM-based agents increasingly browse the web on users' behalf, a natural question arises: can websites passively identify which underlying model powe…
arXiv:2605.14750v1 Announce Type: new Abstract: Large Language Models (LLMs) and Vision Language Models (VLMs) have demonstrated impressive capabilities but remain vulnerable to jailbreaking attacks, …
arXiv:2605.14718v1 Announce Type: new Abstract: The deployment of Fully Homomorphic Encryption (FHE) at scale is hindered by its heavy computational overhead. While specialized hardware accelerato…
arXiv:2605.14633v1 Announce Type: new Abstract: Capacitive touchscreens in modern smartphones introduce severe side-channel vulnerabilities. However, existing attacks often require restrictive conditi…
arXiv:2605.14605v1 Announce Type: new Abstract: Model providers increasingly release open weights or allow users to fine-tune foundation models through APIs. Although these models are safety-aligned b…
arXiv:2605.14591v1 Announce Type: new Abstract: Privacy auditing provides empirical lower bounds on the differential privacy parameters of learning algorithms. Existing methods, however, require inter…