cyberintel.kalymoon.com · 2686 articles · updated every 4 hours · grows forever
arXiv:2605.12682v1 Announce Type: new Abstract: Large language models (LLMs) are increasingly used as reasoning modules in many applications. While they are efficient in certain tasks, LLMs often stru…
arXiv:2605.12674v1 Announce Type: new Abstract: Vision-Language Models (VLMs) are increasingly used in safety-critical applications because of their broad reasoning capabilities and ability to general…
arXiv:2605.12673v1 Announce Type: new Abstract: Agent benchmarks have become the de facto measure of frontier AI competence, guiding model selection, investment, and deployment. However, reward hackin…
arXiv:2605.12655v1 Announce Type: new Abstract: Multi-agent reinforcement learning (MARL) in real-world use cases may need to adapt to external natural language instructions that interrupt ongoing beh…
arXiv:2605.12620v1 Announce Type: new Abstract: Building generalist embodied agents capable of solving complex real-world tasks remains a fundamental challenge in AI. Multimodal Large Language Models …
arXiv:2605.13676v1 Announce Type: new Abstract: Container runtimes provide a stable operational interface for deploying, monitoring, and controlling modern workloads, while trusted execution environme…
arXiv:2605.13503v1 Announce Type: new Abstract: A key technical difficulty in differential privacy is selecting a privacy budget that satisfies privacy requirements while maximizing utility. A natural…
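The privacy/utility tension this abstract refers to is concrete in the classic Laplace mechanism: the noise scale is sensitivity divided by the budget ε, so a smaller ε (stronger privacy) means noisier, lower-utility answers. A minimal sketch (not the paper's method, just the standard mechanism the budget-selection problem sits on top of):

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value under epsilon-DP by adding Laplace noise
    with scale = sensitivity / epsilon (the standard calibration)."""
    rng = rng if rng is not None else np.random.default_rng()
    scale = sensitivity / epsilon
    return true_value + rng.laplace(0.0, scale)

# Smaller epsilon -> larger noise scale -> less useful released value.
rng = np.random.default_rng(0)
for eps in (0.1, 1.0, 10.0):
    released = laplace_mechanism(100.0, sensitivity=1.0, epsilon=eps, rng=rng)
```

Picking ε here is exactly the budget-selection question: every choice fixes a point on this noise/utility curve.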
arXiv:2605.13492v1 Announce Type: new Abstract: Embodied intelligent robots rely on tactile sensors to interact with the physical world safely. While the security of visual perception systems has been…
arXiv:2605.13471v1 Announce Type: new Abstract: Always-on AI agents (OpenClaw, Hermes Agent) run as a single persistent process under the owner's identity, folding messaging, memory, self-authored ski…
arXiv:2605.13411v1 Announce Type: new Abstract: Large language models remain vulnerable to adversarial prompts that elicit harmful outputs. Existing safety paradigms typically couple red-teaming and p…
arXiv:2605.13338v1 Announce Type: new Abstract: Large Reasoning Models (LRMs) are increasingly integrated into systems requiring reliable multi-step inference, yet this growing dependence exposes new …
arXiv:2605.13337v1 Announce Type: new Abstract: Security Information and Event Management (SIEM) systems aggregate log data from heterogeneous sources to detect coordinated attacks. Traditional rule-b…
arXiv:2605.13246v1 Announce Type: new Abstract: Reference counting bugs in Linux kernel drivers can lead to severe resource mismanagement and security vulnerabilities. We introduce DrvHorn, a novel au…
arXiv:2605.13214v1 Announce Type: new Abstract: Recent cryptographic results establish that neural networks can be backdoored such that no efficient algorithm can distinguish them from a clean model. …
arXiv:2605.13163v1 Announce Type: new Abstract: Foundation models and low-rank adapters enable efficient on-device generative AI but raise risks such as intellectual property leakage and model recover…
arXiv:2605.13159v1 Announce Type: new Abstract: IoT devices, particularly microcontrollers, are challenged by their inherent limitations in processing capabilities, memory capacity, and energy conservat…
arXiv:2605.13132v1 Announce Type: new Abstract: Traditional blockchain untraceability schemes, such as mixers and privacy coins, obscure the sender-receiver relationship by placing transfers within an…
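The anonymity-set idea this entry builds on can be sketched in a few lines: a mixer pools equal-denomination deposits and pays out in shuffled order, so any withdrawal could correspond to any depositor in the pool. This is a toy illustration of the general mixer concept, not the paper's scheme; `mix` and its interface are invented for the sketch.

```python
import secrets

def mix(depositors, denomination=1.0):
    """Toy mixer: every participant deposits the same denomination, and
    payouts are issued in a randomly shuffled order.  An observer who sees
    only deposits and withdrawals cannot link a withdrawal to a specific
    depositor; the pool of depositors is the anonymity set."""
    order = list(depositors)
    secrets.SystemRandom().shuffle(order)
    return [(recipient, denomination) for recipient in order]
```

Equal denominations matter: distinct amounts would re-link deposits to withdrawals and collapse the anonymity set.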
arXiv:2605.13115v1 Announce Type: new Abstract: Diffusion models depend on pseudo-random number generators (PRNGs) for latent noise sampling. We present DiffusionHijack, a supply-chain backdoor attack…
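The dependence on PRNG-drawn latent noise that this entry highlights is easy to see in isolation: the initial latent of a diffusion sampler is fully determined by the generator, so a tampered generator controls what gets synthesized. The sketch below is an assumption-laden illustration of that attack surface (the function names and the blending scheme are mine, not DiffusionHijack's actual mechanism):

```python
import numpy as np

def sample_latent(seed, shape=(4, 8, 8)):
    """Clean path: initial diffusion latent drawn from a seeded PRNG."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal(shape)

def tampered_sample_latent(seed, target, alpha=0.2, shape=(4, 8, 8)):
    """Hypothetical backdoored PRNG wrapper: nudges the latent toward an
    attacker-chosen target while keeping it close to Gaussian, so casual
    inspection of the noise statistics is unlikely to flag it."""
    clean = sample_latent(seed, shape)
    return (1.0 - alpha) * clean + alpha * target
```

Because the rest of the pipeline trusts the PRNG, the bias survives end to end, which is what makes this a supply-chain rather than a model-level attack.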
arXiv:2605.13100v1 Announce Type: new Abstract: Security often receives insufficient developer attention because it does not directly generate visible value, leading to underinvestment in practice. We…
arXiv:2605.13095v1 Announce Type: new Abstract: Watermarking is widely proposed for provenance, attribution, and safety monitoring in generative models, yet is typically evaluated only under adversari…
arXiv:2605.13044v1 Announce Type: new Abstract: LLM-powered agents can silently delete documents, leak credentials, or transfer funds on a routine user request, not because the agent was attacked, but…
arXiv:2605.12990v1 Announce Type: new Abstract: In the official whitepaper of Secure Encrypted Virtualization with Secure Nested Paging (SEV-SNP), AMD explicitly emphasizes the capability to prevent T…
arXiv:2605.12976v1 Announce Type: new Abstract: Modern cloud-native environments present a fundamentally different exfiltration threat surface than traditional file-based scenarios. Attackers targetin…
arXiv:2605.12942v1 Announce Type: new Abstract: Large-scale datasets have been a key driving force behind the rapid progress of deep learning, but their storage, computational, and energy costs have b…