
OpenAI Launches Daybreak for AI-Powered Vulnerability Detection and Patch Validation

The Hacker News · Archived May 12, 2026



Ravie Lakshmanan · May 12, 2026 · Vulnerability / AI Security

OpenAI has launched Daybreak, a new cybersecurity initiative that brings together frontier artificial intelligence (AI) model capabilities and Codex Security to help organizations identify and patch vulnerabilities before attackers can exploit the same issues.

"Daybreak combines the intelligence of OpenAI models, the extensibility of Codex as an agentic harness, and our partners across the security flywheel to help make the world safer for everyone," the AI upstart said. "Defenders can bring secure code review, threat modeling, patch validation, dependency risk analysis, detection, and remediation guidance into the everyday development loop so software becomes more resilient from the start."

Like Anthropic's Mythos, the idea is to use AI to tilt the balance in favor of defenders, detecting and addressing security issues before they are found by bad actors. Access to the tooling remains tightly controlled for now, with OpenAI urging interested organizations to request a vulnerability scan or contact its sales team.

Daybreak leverages Codex Security to build an editable threat model for a given repository that focuses on realistic attack paths and high-impact code, to identify and test vulnerabilities in an isolated environment, and to propose fixes. The effort is built on the foundations of three models: GPT-5.5 (which has standard safeguards for general-purpose use), GPT-5.5 with Trusted Access for Cyber (for verified defensive work in authorized environments), and GPT-5.5-Cyber (a permissive model for red teaming, penetration testing, and controlled validation).
Several major companies, including Akamai, Cisco, Cloudflare, CrowdStrike, Fortinet, Oracle, Palo Alto Networks, and Zscaler, are already integrating these capabilities under the Trusted Access for Cyber initiative, OpenAI said, adding that it's working with industry and government partners to deploy "more cyber-capable models" in the future.

The rollout comes as AI tools have shortened the time it takes to discover latent security issues that might otherwise have escaped notice, compressing what once took significant time and effort into a much shorter period of work. As a result, the patching process can struggle to keep up even under ideal conditions.

Earlier this March, HackerOne paused its bug bounty program, citing a shift in balance between vulnerability discoveries and the ability of open-source maintainers to address them, and attributing it to how AI-assisted research has increased both the volume of new flaws and the speed at which they are identified. This has also had the side effect of what's called triage fatigue, where project maintainers must sift through a flood of vulnerability reports, some of which can be plausible-sounding but entirely hallucinated by AI models.

As AI lowers the barrier to finding security flaws, companies like Anthropic, Google, and OpenAI have increasingly positioned AI security agents as a new operational layer to address the remediation bottleneck and safeguard digital infrastructure from potential exploitation. In a post published last week, security researcher Himanshu Anand said "the 90 day disclosure policy is dead," as large language models (LLMs) compress disclosure and exploit timelines to near-zero. "When 10 unrelated researchers find the same bug in six weeks, and AI can turn a patch diff into a working exploit in 30 minutes, what exactly is the 90-day window protecting? Nobody," Anand said.
    Article Info
    Source
    The Hacker News
    Category
    ◇ Industry News & Leadership
    Published
    May 12, 2026
    Archived
    May 12, 2026
    Full Text
    ✓ Saved locally