◇ Industry News & Leadership · May 11, 2026

AI Researchers Target SIEM Migration Bottleneck

Data Breach Today · Archived May 11, 2026

System Translates Detection Rules Across Security Platforms

Researchers developed an AI framework that converts threat detection rules between major SIEM platforms, including Splunk, Microsoft Sentinel and QRadar. The system uses LLMs and automated validation steps to preserve detection logic during migrations that often require months of manual work.



Rashmi Ramesh (rashmiramesh_) • May 11, 2026
Image: Lee Yiu Tung/Shutterstock

Any time a company switches security monitoring software or absorbs a rival's IT infrastructure, its library of threat-detection rules will probably break. The new platform speaks a different language, and building out rules by hand can take months. A research team from the National University of Singapore and Fudan University thinks an artificial intelligence agent can do it faster, with fewer mistakes.

Researchers tested a system they dubbed ARuleCon on nearly 1,500 rule conversions spanning five platforms: Splunk, Microsoft Sentinel, IBM QRadar, Google Chronicle and RSA NetWitness. These SIEMs ingest logs, correlate events and fire alerts when something looks wrong. Each vendor uses its own proprietary query language, and those languages differ in ways that go well beyond syntax. Moving a detection rule from one to another is not a find-and-replace exercise.

ARuleCon tackles the problem in three stages. First, it reads a source rule and strips out all the platform-specific code, producing a plain-language description of what the rule is supposed to do: its filters, time windows, thresholds and grouping conditions. This description is handed to a large language model, which drafts an equivalent rule in the target platform's language. Finally, two automated checking agents refine the draft. One queries official vendor documentation to verify that operators and field names are correct for the destination platform. The other runs both the original and converted rules as Python code over synthetic log data to confirm they produce the same output.
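The differential check in that third stage can be sketched in miniature. Below, a Splunk-style brute-force rule and a hypothetical KQL translation of it are both rendered as Python functions and run over hand-built synthetic logs; identical alert sets mean the conversion preserved the detection logic, while a subtly wrong translation is caught. All rule text, field names and thresholds here are illustrative, not taken from the paper.

```python
from collections import Counter

# Hand-built synthetic auth logs: alice has 6 failed logins, bob exactly 5,
# carol only successes. The edge case at the threshold is deliberate.
logs = ([{"user": "alice", "action": "failure"}] * 6 +
        [{"user": "bob", "action": "failure"}] * 5 +
        [{"user": "carol", "action": "success"}] * 10)

def source_rule(logs, threshold=5):
    """Python rendering of a Splunk-style rule:
       index=auth action=failure | stats count by user | where count > 5"""
    fails = Counter(e["user"] for e in logs if e["action"] == "failure")
    return {u for u, c in fails.items() if c > threshold}

def converted_rule(logs, threshold=5):
    """Python rendering of a hypothetical Sentinel/KQL translation:
       AuthLogs | where Action == "failure"
                | summarize c = count() by User | where c > 5"""
    fails = Counter(e["user"] for e in logs if e["action"] == "failure")
    return {u for u, c in fails.items() if c > threshold}

def faulty_rule(logs, threshold=5):
    """A broken translation that used >= instead of > at the threshold."""
    fails = Counter(e["user"] for e in logs if e["action"] == "failure")
    return {u for u, c in fails.items() if c >= threshold}

def consistent(rule_a, rule_b, logs):
    """Differential check: do both rules alert on exactly the same entities?"""
    return rule_a(logs) == rule_b(logs)

print(sorted(source_rule(logs)))                      # ['alice']
print(consistent(source_rule, converted_rule, logs))  # True  - logic preserved
print(consistent(source_rule, faulty_rule, logs))     # False - bob trips >= at 5
```

The value of the check depends entirely on the test logs covering the rule's edge cases, which is exactly the limitation the researchers flag below.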
Mismatches trigger a repair loop. In benchmarking across three large language models - GPT-5, DeepSeek-V3 and LLaMA-3 - ARuleCon outperformed each model used alone by roughly 15% on average across structural, semantic and logical consistency measures. The improvements held regardless of which model was underneath, showing that the system's design, not any particular model, is doing the work. Most conversions also ran without errors on the target platform, with rates above 90% in most cases and near-perfect for Google Chronicle and Splunk. IBM QRadar and RSA NetWitness proved harder, partly because their documentation is less comprehensive and their grammar more complex.

The researchers discussed where the system can fail. The Python-based consistency check, which confirms that the original and converted rules behave identically, runs over logs that the system itself generates, not the noisy, evolving data streams found in real security operations. "Our confidence is strongest for rules whose semantics can be well-covered by generated test cases, and weaker for rules involving rare behaviors, custom schemas or complex temporal correlations," one of the paper's authors, Ming Xu, told ISMG.

The neutral template that underpins the whole system also has limits. It works well for most standard detection logic but breaks down when platforms differ in how they execute rules. Rules that depend on stateful processing, vendor-specific data enrichment or behaviors that aren't written down in the rule itself can be problematic.

The team recommends staged validation before deployment. The most critical steps are testing converted rules against historical logs and known attack traces, and running them in a monitoring-only mode before they go live. This validation workflow is currently offline, and the team flags it as future work.

ARuleCon also treats vendor documentation as ground truth when refining its conversions, which introduces its own vulnerability.
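The mismatch-triggered repair loop described above might look something like this sketch. Here `check` and `refine` are stand-ins for ARuleCon's checking agents and LLM refinement step, and the string matching is a deliberately crude proxy for real consistency feedback; none of these names come from the paper.

```python
def check(draft: str) -> bool:
    """Stand-in consistency check: 'passes' means the draft uses the strict
    comparison the source rule used (a real check would diff rule outputs)."""
    return "count > 5" in draft

def refine(draft: str, feedback: str) -> str:
    """Stand-in for an LLM refinement step guided by checker feedback."""
    return draft.replace("count >= 5", "count > 5")

def repair_loop(draft: str, max_attempts: int = 3) -> tuple[str, bool]:
    """Re-check and refine until the draft passes or attempts run out."""
    for _ in range(max_attempts):
        if check(draft):
            return draft, True
        draft = refine(draft, feedback="threshold comparison differs from source")
    return draft, check(draft)

fixed, ok = repair_loop("AuthLogs | summarize count() by User | where count >= 5")
print(ok)  # True - one refinement pass corrected the off-by-one threshold
```

Bounding the number of attempts matters: a draft the checkers cannot fix is surfaced as a failure for a human detection engineer rather than looping forever.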
The system has limited ability to detect when documentation is wrong or incomplete, though the researchers consider such cases rare, since vendor specifications are generally reliable. The design does allow for documentation to be updated.

The process is not fast. A single conversion with GPT-5 takes around 140 seconds and uses roughly 10 times the computational resources of a direct language-model translation. ARuleCon is built for batch work such as platform migrations, rule onboarding and periodic maintenance, not real-time alerting. "Spending tens of seconds or even longer on a high-quality conversion can be acceptable, especially when compared with the manual effort required from detection engineers," Xu said.

The source code has been released publicly via GitHub, and the team's industry partner, Singtel's NCS Group in Singapore, is commercializing a prototype. "We view this as a key reason why ARuleCon should augment analysts rather than replace them," Xu said.
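The monitoring-only mode the researchers recommend before cutover is essentially a shadow deployment: the converted rule sees the same events as the production rule, but its verdicts are only recorded for comparison and never page anyone. A minimal sketch, assuming a simple predicate-per-event rule interface that is not from the paper:

```python
from dataclasses import dataclass, field

@dataclass
class ShadowDeployment:
    live_rule: object        # current production rule (callable on an event)
    candidate_rule: object   # converted rule under evaluation
    disagreements: list = field(default_factory=list)

    def process(self, event) -> bool:
        live = self.live_rule(event)
        shadow = self.candidate_rule(event)
        if live != shadow:
            self.disagreements.append(event)  # logged for review, not alerted
        return live                           # only the live verdict fires

live = lambda e: e["failed_logins"] > 5
candidate = lambda e: e["failed_logins"] >= 5   # subtly different translation

shadow = ShadowDeployment(live, candidate)
events = [{"failed_logins": n} for n in (2, 5, 9)]
fired = [shadow.process(e) for e in events]
print(fired)                      # [False, False, True] - production unaffected
print(len(shadow.disagreements))  # 1 - the boundary case, caught before cutover
```

Running converted rules this way against historical logs and known attack traces, as the researchers suggest, turns a risky migration into a measurable comparison before anything goes live.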