arXiv:2604.17125v1 Announce Type: new Abstract: Model Context Protocol (MCP) is a rapidly adopted standard for defining and invoking external tools in LLM applications. The multi-layered architecture …
arXiv:2604.17093v1 Announce Type: new Abstract: The integration of large language models (LLMs) into electronic design automation (EDA) workflows has introduced powerful capabilities for RTL generatio…
arXiv:2604.17014v1 Announce Type: new Abstract: Prior work has demonstrated that functionally correct yet vulnerable outputs arise systematically in threat-oriented settings, where adversarial or impl…
arXiv:2604.17003v1 Announce Type: new Abstract: Final FIPS and PKIX standards for ML-KEM and ML-DSA fix the normative floor, but operational assurance in post-quantum X.509 still depends on accountabl…
arXiv:2604.16966v1 Announce Type: new Abstract: The evolution from static ranking models to Agentic Recommender Systems (Agentic RecSys) empowers AI agents to maintain long-term user profiles and auto…
arXiv:2604.16870v1 Announce Type: new Abstract: AI agents increasingly call external tools (file system, network, APIs) through the Model Context Protocol (MCP). These tool calls are the agent's sysca…
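As an editorial aside, a minimal sketch of what such a tool call looks like on the wire, assuming the JSON-RPC 2.0 framing and the `tools/call` method used by public MCP implementations; the `read_file` tool name and its arguments are hypothetical, not taken from the paper.

```python
import json

# Hypothetical MCP-style tool invocation (JSON-RPC 2.0 framing).
# The agent asks an MCP server to run a tool, much like a process issuing a syscall.
request = {
    "jsonrpc": "2.0",
    "id": 42,
    "method": "tools/call",
    "params": {
        "name": "read_file",                      # hypothetical filesystem tool
        "arguments": {"path": "/etc/hostname"},   # potentially attacker-influenced input
    },
}

print(json.dumps(request, indent=2))
```

Framing every external action as a message of this shape is what makes syscall-style mediation (allow-lists, argument filtering, auditing) possible at the protocol boundary.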
arXiv:2604.16838v1 Announce Type: new Abstract: We present enclawed, a hard-fork hardening framework built on top of the OpenClaw single-user personal artificial intelligence (AI) assistant gateway. e…
arXiv:2604.16834v1 Announce Type: new Abstract: Privacy-preserving machine learning (PPML) has become increasingly important in applications where sensitive data must remain confidential. Homomorphic …
arXiv:2604.16832v1 Announce Type: new Abstract: Timing side-channel attacks exploit variations in program execution time to recover sensitive information. Cryptographic implementations are especially …
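To make the mechanism in the first sentence concrete, here is a toy sketch (an editorial illustration, not taken from the paper) contrasting an early-exit comparison, whose running time depends on how many leading bytes match, with a constant-time check:

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    # Returns as soon as a byte differs, so running time grows with the
    # length of the matching prefix -- the classic timing side channel.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    # hmac.compare_digest examines every byte regardless of mismatches,
    # removing the data-dependent timing variation.
    return hmac.compare_digest(a, b)

secret = b"s3cr3t-token-value"
print(naive_equal(b"s3cr3t-XXXXXXXXXXX", secret))          # exits early on first mismatch
print(constant_time_equal(b"s3cr3t-XXXXXXXXXXX", secret))  # same cost as a full match
```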
arXiv:2604.16827v1 Announce Type: new Abstract: Academic examination systems worldwide continue to rely on centralised, opaque record-keeping that is often vulnerable to credential forgery, result tam…
arXiv:2604.16824v1 Announce Type: new Abstract: Multi-turn jailbreak attacks progressively erode LLM safety alignment across seemingly innocuous conversation turns, achieving success rates exceeding 9…
arXiv:2604.16762v1 Announce Type: new Abstract: Modern AI agents routinely depend on secrets such as API keys and SSH credentials, yet the dominant deployment model still exposes those secrets directl…
arXiv:2604.16760v1 Announce Type: new Abstract: Ransomware detection systems increasingly rely on behavior-based machine learning to address evolving attack strategies. However, emerging privacy compl…
arXiv:2604.16699v1 Announce Type: new Abstract: As Cyber-Physical Systems (CPS) become increasingly pervasive and autonomous, ensuring the resilience of their embedded logic is critical to maintaining…
arXiv:2604.16697v1 Announce Type: new Abstract: Large language models write production code, and yet they routinely introduce well-known vulnerabilities. We show that this is not a knowledge deficit: …
arXiv:2604.16669v1 Announce Type: new Abstract: Modern cryptographic primitives generate large volumes of sequential data, such as keystreams, ciphertext blocks, and hash outputs. Traditi…
arXiv:2604.16659v1 Announce Type: new Abstract: Prior work shows that fine-tuning aligned models on benign data degrades safety in text and vision modalities, and that proximity to harmful content in …
arXiv:2604.16606v1 Announce Type: new Abstract: Large language models (LLMs) are increasingly deployed in high-stakes domains, yet a unified treatment of their overlapping safety challenges remains la…
arXiv:2604.16559v1 Announce Type: new Abstract: Light clients are essential for scalable blockchain systems because they verify data availability without downloading full blocks. In data availability …
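As a back-of-envelope illustration of the sampling argument (generic to data availability sampling, not specific to this paper's scheme): with erasure coding, an adversary must withhold at least half of the shares to prevent reconstruction, so the chance that s independent uniform samples all miss the withheld portion falls off roughly as (1/2)^s.

```python
# Generic data-availability-sampling estimate: probability that a light
# client's s random samples all land on available shares even though an
# adversary withheld a fraction `withheld` of them (sampling with
# replacement, for simplicity).
def miss_probability(withheld: float, samples: int) -> float:
    return (1.0 - withheld) ** samples

for s in (10, 20, 30):
    # >= 50% must be withheld to block reconstruction of an erasure-coded block.
    print(s, miss_probability(0.5, s))
```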
arXiv:2604.16548v1 Announce Type: new Abstract: Research on large language model (LLM) security is shifting from "will the model leak training data" to a more consequential question: can an agent with…
arXiv:2604.16542v1 Announce Type: new Abstract: Safety guardrails have become an active area of research in AI safety, aimed at ensuring the appropriate behavior of large language models (LLMs). Howev…
arXiv:2604.16534v1 Announce Type: new Abstract: The communication protocols and data transfer mechanisms employed by IoT devices in smart buildings and corresponding digital twin systems predominantly…
arXiv:2604.16524v1 Announce Type: new Abstract: As autonomous AI agents increasingly call other agents to complete tasks on behalf of a human principal, a structural accountability gap has emerged: th…
arXiv:2604.16521v1 Announce Type: new Abstract: The deployment of Large Language Models in agentic, multi-turn conversational settings has introduced a class of privacy vulnerabilities that existing p…