arXiv:2605.13411v1 Announce Type: new Abstract: Large language models remain vulnerable to adversarial prompts that elicit harmful outputs. Existing safety paradigms typically couple red-teaming and p…
arXiv:2605.13338v1 Announce Type: new Abstract: Large Reasoning Models (LRMs) are increasingly integrated into systems requiring reliable multi-step inference, yet this growing dependence exposes new …
arXiv:2605.13337v1 Announce Type: new Abstract: Security Information and Event Management (SIEM) systems aggregate log data from heterogeneous sources to detect coordinated attacks. Traditional rule-b…
arXiv:2605.13246v1 Announce Type: new Abstract: Reference counting bugs in Linux kernel drivers can lead to severe resource mismanagement and security vulnerabilities. We introduce DrvHorn, a novel au…
arXiv:2605.13214v1 Announce Type: new Abstract: Recent cryptographic results establish that neural networks can be backdoored such that no efficient algorithm can distinguish them from a clean model. …
arXiv:2605.13163v1 Announce Type: new Abstract: Foundation models and low-rank adapters enable efficient on-device generative AI but raise risks such as intellectual property leakage and model recover…
arXiv:2605.13159v1 Announce Type: new Abstract: IoT devices, particularly microcontrollers, are challenged by their inherent limitations in processing capabilities, memory capacity, and energy conservat…
arXiv:2605.13132v1 Announce Type: new Abstract: Traditional blockchain untraceability schemes, such as mixers and privacy coins, obscure the sender-receiver relationship by placing transfers within an…
arXiv:2605.13115v1 Announce Type: new Abstract: Diffusion models depend on pseudo-random number generators (PRNGs) for latent noise sampling. We present DiffusionHijack, a supply-chain backdoor attack…
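The premise of the DiffusionHijack entry is that the latent noise a diffusion model starts from is fully determined by its PRNG, so compromising the generator in the supply chain controls every "random" sample. A minimal sketch of that dependency (using Python's stdlib `random` as a stand-in for the real sampler; the function name `sample_latent` is illustrative, not from the paper):

```python
import random

def sample_latent(seed, dim=8):
    """Draw a Gaussian latent-noise vector, as a diffusion sampler would,
    from a seeded PRNG. stdlib random stands in for the model's generator."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(dim)]

# Identical seed, identical latent: whoever controls the PRNG controls
# every starting point of the generation process.
assert sample_latent(42) == sample_latent(42)
assert sample_latent(42) != sample_latent(43)
```

This only illustrates why the PRNG is a single point of failure; the paper's actual backdoor construction is not reproduced here.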
arXiv:2605.13100v1 Announce Type: new Abstract: Security often receives insufficient developer attention because it does not directly generate visible value, leading to underinvestment in practice. We…
arXiv:2605.13095v1 Announce Type: new Abstract: Watermarking is widely proposed for provenance, attribution, and safety monitoring in generative models, yet is typically evaluated only under adversari…
arXiv:2605.13044v1 Announce Type: new Abstract: LLM-powered agents can silently delete documents, leak credentials, or transfer funds on a routine user request, not because the agent was attacked, but…
arXiv:2605.12990v1 Announce Type: new Abstract: In the official whitepaper of Secure Encrypted Virtualization with Secure Nested Paging (SEV-SNP), AMD explicitly emphasizes the capability to prevent T…
arXiv:2605.12976v1 Announce Type: new Abstract: Modern cloud-native environments present a fundamentally different exfiltration threat surface than traditional file-based scenarios. Attackers targetin…
arXiv:2605.12942v1 Announce Type: new Abstract: Large-scale datasets have been a key driving force behind the rapid progress of deep learning, but their storage, computational, and energy costs have b…
arXiv:2605.12927v1 Announce Type: new Abstract: Standalone virtual reality (VR) headsets process highly sensitive personal, professional, and health-related data, yet their susceptibility to non-conta…
arXiv:2605.12875v1 Announce Type: new Abstract: Programmatic skills in LLM ecosystems consist of a natural-language description and executable implementation files. Users and LLMs rely on the descript…
arXiv:2605.12869v1 Announce Type: new Abstract: Large language models (LLMs) are increasingly deployed in a wide range of applications, yet remain vulnerable to adversarial jailbreak attacks that circ…
arXiv:2605.12841v1 Announce Type: new Abstract: Homomorphic encryption (HE) enables computation over encrypted data, offering strong privacy guarantees for untrusted computing environments. Practical …
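The property this entry builds on, computing on data while it stays encrypted, can be seen in a toy additively homomorphic Paillier instance (deliberately insecure 8-bit primes, stdlib only; a sketch of the general HE concept, not the scheme used in the paper):

```python
import math
import random

# Toy parameters: real deployments use primes of 1024+ bits.
p, q = 17, 19
n = p * q
n2 = n * n
g = n + 1                      # standard choice that simplifies decryption
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)           # modular inverse; valid because g = n + 1

def encrypt(m):
    """Paillier encryption: c = g^m * r^n mod n^2 for random unit r."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """Recover m via L(c^lam mod n^2) * mu mod n, with L(x) = (x-1)/n."""
    L = (pow(c, lam, n2) - 1) // n
    return (L * mu) % n

# Additive homomorphism: multiplying ciphertexts adds the plaintexts.
c1, c2 = encrypt(12), encrypt(30)
assert decrypt((c1 * c2) % n2) == 42
```

Multiplying two ciphertexts yields an encryption of the sum of their plaintexts, which is exactly the kind of computation-over-encrypted-data an untrusted server can perform without ever seeing the inputs.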
arXiv:2605.12827v1 Announce Type: new Abstract: Graph neural networks (GNNs) deployed as cloud services can be stolen through model-extraction attacks, which train a surrogate from query…
arXiv:2605.12746v1 Announce Type: new Abstract: Monitoring the chain-of-thought (CoT) of reasoning models is a promising approach for detecting covert misbehavior (i.e., hidden objectives) in code gen…
arXiv:2605.12743v1 Announce Type: new Abstract: Existing physical adversarial attacks on vision-based autonomous driving induce time-evolving perception errors, including biased object tracking or tra…
arXiv:2605.12565v1 Announce Type: new Abstract: Existing automated red-teaming pipelines often miss attacks that depend on attacker identity, framing, or multi-turn tactics. This under-coverage undere…
arXiv:2605.12563v1 Announce Type: new Abstract: Script-language runtimes such as Python, Lua, and JavaScript are widely deployed in security-sensitive contexts, yet they remain difficult to test becau…