As More Coders Adopt AI Agents, Security Pitfalls Lurk in 2026
Developers are leaning more heavily on AI for code generation, but in 2026 the development pipeline and security need to be prioritized.
Robert Lemos, Contributing Writer
December 26, 2025
5 Min Read
Claude Opus 4.5 Thinking gets top marks, but only slightly more than half of the code generated is correct and secure. Source: BaxBench.com
Software may be eating the world — to paraphrase one tech luminary — but in 2025, artificial intelligence (AI) ate software development. The vast majority of professional programmers now use large language models (LLMs) for code suggestions, debugging, and even vibe coding.
Yet challenges remain: Even as developers start to use AI agents to build applications and integrate AI services into the development and production pipeline, the quality of the code, and especially its security, varies significantly. Greenfield projects may see better productivity and security results than efforts to rewrite existing code, especially when vulnerabilities in the older code propagate into the new version. Some companies see few productivity gains, while others see significant benefits.
Software developers are moving faster, but depending on their knowledge and practices, they may not be producing secure code, says Chris Wysopal, chief security evangelist at application security firm Veracode.
Related: Rust Code Delivers Better Security, Also Streamlines DevOps
AI-assisted coding, refactoring, and architectural generation will dramatically increase code volume and complexity, so organizations will ship more software faster but with less human visibility, he explains.
In 2026, software developers should expect AI tools and agents will transform the development pipeline, from detecting bugs in code to triaging code defects and improving security, Wysopal says.
"The takeaway is you must have mature usage of the tools by your team," he says.
New Security for New AI Development
Already, developers have fully integrated AI code generation and analysis into their workflows. An October 2025 survey conducted by development tool maker JetBrains found that 85% of the nearly 25,000 surveyed developers regularly used AI tools for coding and software design work. A similar study conducted by Google found that 90% of software development professionals had adopted AI.
Yet security continues to be a problem. Currently, Anthropic's Claude Opus 4.5 Thinking LLM earns top marks in BaxBench, a benchmark created by a group of academic and industry researchers to measure the security of generated code. Even so, the model produces secure and correct code only 56% of the time without any security prompting, and 69% of the time when told to avoid specific, known vulnerabilities, an unrealistic caveat for real-world development, the researchers said.
Generating more code with the same frequency of vulnerabilities means more bugs that need to be fixed. Many development teams have to rework AI-generated code, which eats up 15 to 25 percentage points of the 30% to 40% productivity gains potentially achieved by AI-augmented developers, according to a Stanford University study.
Related: AI-Generated Code Poses Security, Bloat Challenges
Adding security tooling into the development pipeline — especially the parts where developers interact with AI systems — will be necessary in 2026. First up, developers using LLMs to produce code need to, at the very least, include standard prompts that prioritize security. Doing so often improves the likelihood of secure code: A generic security reminder resulted in secure and correct code 66% of the time versus 56% with no reminder for Claude Opus 4.5 Thinking (although a security reminder appears to have degraded the performance of OpenAI's GPT-5 because fewer proposed solutions were correct).
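As a minimal sketch of the practice, the snippet below prepends a standing security reminder to every code-generation request, mirroring the generic reminder the BaxBench researchers tested. The llm_call() helper is a hypothetical stand-in for whatever model client a team already uses.

```python
# Minimal sketch: prepend a standing security reminder to every
# code-generation prompt. llm_call() is a hypothetical stand-in for
# whatever LLM client a team already uses.

SECURITY_REMINDER = (
    "You are writing production code. Prioritize security: validate all "
    "inputs, parameterize database queries, avoid unsafe deserialization "
    "and shell injection, and never hard-code secrets."
)

def generate_code(task: str, llm_call) -> str:
    """Send the task to the model with the security reminder attached."""
    return llm_call(f"{SECURITY_REMINDER}\n\nTask: {task}")

# Example: generate_code("Add a password-reset endpoint", llm_call)
```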
Adding more traditional tooling, such as static scanners, alongside newer AI-based security scanners can improve results even further, but older scanners will not detect some newer AI-focused attacks, says Manoj Nair, chief innovation officer at secure development platform Snyk. The emerging attacks stem from missing security context, AI hallucinations, and the problems that arise with stochastic systems, he explains.
"[These AI systems] are not deterministic, they're probabilistic," Nair says. "That can be exploited in lots of different ways, and so it needs to be secured in a very different way."
Related: AI Conundrum: Why MCP Security Can't Be Patched Away
AI Everywhere
Development tool makers are inserting AI agents and features throughout their platforms, says Veracode's Wysopal. Properly configured, these AI agents will go beyond code generation to also catch insecure code and suggest secure alternatives automatically, enforce company-specified security policies, and block unsafe patterns before they reach the repository, he says.
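In its simplest form, "blocking unsafe patterns before they reach the repository" can be approximated with a pre-commit check along the following lines. The pattern list is an illustrative stand-in for a policy a security team would actually own.

```python
# Illustrative pre-commit hook: reject staged changes that introduce
# known-dangerous calls. The pattern list is a stand-in for a real,
# security-team-owned policy, not an exhaustive rule set.
import re
import subprocess
import sys

UNSAFE_PATTERNS = {
    r"\beval\(": "eval() on dynamic input",
    r"\bpickle\.loads\(": "unsafe deserialization",
    r"(?i)password\s*=\s*[\"']": "hard-coded credential",
}

def staged_additions() -> list[str]:
    """Return the lines added in the staged diff."""
    diff = subprocess.run(
        ["git", "diff", "--cached", "--unified=0"],
        capture_output=True, text=True,
    ).stdout
    return [line[1:] for line in diff.splitlines()
            if line.startswith("+") and not line.startswith("+++")]

def main() -> int:
    blocked = False
    for line in staged_additions():
        for pattern, reason in UNSAFE_PATTERNS.items():
            if re.search(pattern, line):
                print(f"blocked ({reason}): {line.strip()}")
                blocked = True
    return 1 if blocked else 0  # nonzero exit aborts the commit

if __name__ == "__main__":
    sys.exit(main())
```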
Developers will have to learn how to securely interact with AI systems embedded in their integrated development environments, continuous integration pipelines, and code review workflows, Wysopal says.
"Developers need to treat AI-generated code as potentially vulnerable and follow a security testing and review process as they would for any human-generated code," Wysopal says. "They should have automated pipelines for testing and AI-generated code fixes."
One critical component is the Model Context Protocol (MCP) servers that increasingly link LLMs and other AI systems to databases and corporate resources, making them a key piece of the next-generation applications that need to be secured. Yet the servers are often left unsecured, as demonstrated by a July scan that discovered 1,862 MCP servers connected to the public Internet, almost all without authentication.
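The check itself is not complicated: something like the sketch below, which probes an HTTP-exposed endpoint and reports whether it turns away anonymous requests, could run as part of an inventory job. The /mcp path and the example hostname are assumptions for illustration; real deployments vary.

```python
# Sketch of an inventory check: does an HTTP-exposed MCP server reject
# anonymous requests? The /mcp path and hostname are illustrative
# assumptions; actual endpoints vary by deployment.
import urllib.error
import urllib.request

def requires_auth(base_url: str, path: str = "/mcp") -> bool:
    """Probe the endpoint without credentials and report the result."""
    req = urllib.request.Request(base_url.rstrip("/") + path, method="GET")
    try:
        with urllib.request.urlopen(req, timeout=5):
            return False  # a 2xx answer with no credentials: wide open
    except urllib.error.HTTPError as err:
        return err.code in (401, 403)  # authentication is being enforced
    except urllib.error.URLError:
        return True  # unreachable anonymously from this vantage point

print(requires_auth("http://mcp.example.internal:8080"))
```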
Companies need to set policy in regard to these AI components of applications and services, says Snyk's Nair.
"Shadow agents are the new shadow IT. If you don't know what tools and what MCP servers are being used by the devs, then how are you going to secure them?" he says. "It's quite surprising what people are finding in terms of agentic blind spots. We've found MCP servers being built into codebases in highly regulated environments."
Don't Let AI Be a Blind Spot
With AI components not only helping developers create applications but also becoming critical parts of the applications themselves, companies need new ways to keep track of what they depend on. They should move beyond software bills of materials to create AI bills of materials that catalog specific, vetted technologies, and they should not allow developers to move outside of those, says Nair.
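What that might look like in practice is a vetted allowlist that a CI job can diff against the AI components a codebase actually uses. The schema below is invented for illustration; standards bodies such as CycloneDX are formalizing machine-readable AI/ML bills of materials.

```python
# Illustrative AI bill of materials: a vetted allowlist a CI job can
# diff against what the codebase actually pulls in. The schema is
# invented for this example; CycloneDX is formalizing ML-BOM formats.
AI_BOM = {
    "models": [
        {"name": "claude-opus-4.5", "provider": "Anthropic", "approved": True},
    ],
    "mcp_servers": [
        {"name": "internal-docs", "endpoint": "https://mcp.example.internal",
         "auth": "oauth2", "approved": True},
    ],
    "agents": [
        {"name": "code-review-agent", "owner": "appsec-team", "approved": True},
    ],
}

def is_approved(kind: str, name: str) -> bool:
    """Check a discovered AI component against the vetted inventory."""
    return any(c["name"] == name and c["approved"]
               for c in AI_BOM.get(kind, []))

print(is_approved("mcp_servers", "internal-docs"))   # True
print(is_approved("mcp_servers", "shadow-scraper"))  # False: a shadow agent
```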
AI-coding platform Cursor, for example, recently introduced a feature that lets developers inspect the runtime state of their programs using AI agents. Its Debug Mode allows an agent to instrument the code, log the runtime output, and analyze the logs to propose a fix.
Other tool makers, such as Snyk, focus on integrating security checks at every step. Development teams that focus on security are more likely to benefit from AI's productivity gains without having to rework poor-quality, insecure code, Nair says.
"Securely adopting these AI technologies from the ground up just changes the speed at which software [can be developed]," he says. "From the point you start building agents, you gain benefits, but that is also where there's a lot of work that has to be done" for security.
About the Author
Robert Lemos
Contributing Writer
Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT's Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline Journalism (Online) in 2003 for coverage of the Blaster worm. Crunches numbers on various trends using Python and R. Recent reports include analyses of the shortage in cybersecurity workers and annual vulnerability trends.