CyberIntel ⬡ News
◇ Industry News & Leadership · May 12, 2026

Tech Can't Stop These Threats — Your People Can

Dark Reading Archived May 12, 2026 ✓ Full text saved

Security controls can do only so much. Here are four attacks where your employees are usually your first, and only, line of cyber defense.



OPINION

A. Stryker, Director of Threat Analysis, Fable Security
May 11, 2026 · 8 Min Read

I begin, as every strong article should, with a caveat: Technical security controls are critically important. Deploy them all — the SOAR playbooks, the SIEM log ingestions, the EDR clients — and use as many as you have the budget, time, and manpower to use. And, for the love of all that's secure, don't stop tuning them.

However, those same technical controls can't stop a growing category of cyberattacks that are specifically engineered to evade or abuse real systems and trusted employees to do their dirty work. For these cases, your best (and sometimes only) defense isn't another dashboard or detection; it's an employee who knows what they're looking at and what they can do to stop it.

A new report analyzing last quarter's human threat landscape found a total of 10 key cyber threats on track to outpace security control deployments. What struck me about these findings is not just how the attacks worked, but how consistently the most effective countermeasure in each case came back to human behavior — not as a stopgap while better tech gets built, but as a genuinely irreplaceable compensating control.

I pulled out four cyber trends that I thought were especially relevant in a quick retrospective, as security teams start to analyze exploited human behaviors and attack trends from the first quarter of 2026.

BEC: The Social Engineering Attack Controls Can't Stop

Business email compromise (BEC) is, statistically, the most efficient attack in the modern threat landscape.
According to the 2025 "Microsoft Digital Defense Report," BEC attacks represented just 2% of attempted attacks last year, yet accounted for 21% of all successful ones. To compare, ransomware made up only 16% of successful attacks — despite receiving substantially more attention and security investment.

Why is BEC so effective? Because it's a pure social engineering attack. There's no malware to detect, no malicious link to block, no payload to sandbox. The attacker tricks an authorized employee into intentionally moving money as part of "normal" business processes, or even coaches them into bypassing a technical security control for an "urgent" payment. They'll pretend to be an impatient internal executive or a well-meaning external vendor, each with legitimate-sounding urgency.

No EDR flags known-good business processes. No email security gateway catches all attempts, since some come in as voice-phishing phone calls outside the inbox. These technical controls are working exactly as designed: from their perspective, nothing unusual happened.

The human solution here isn't complicated, but it requires investment to actually work: make, train, and enforce policy adherence for out-of-band requests — and critically, don't punish employees who pump the brakes on a wire transfer request because it came from the CEO's email on a Friday afternoon. That pause is the control working. Treat it that way.

Because when CrowdStrike's "2026 Global Threat Report" says that 83% of its incidents were caused by "malware-less" infections, we've got more attacks like this one coming.

Shadow AI: The Data Violation No DLP Tool Sees Coming

Shadow AI — where employees connect unauthorized generative AI tools to work systems — was one of last quarter's top risk drivers across Fable customer environments.
Those findings are supported by a growing body of research quantifying the true risk of uncontrolled, unauthorized AI tools at work. One survey, for example, found that 51% of employees had connected unauthorized AI tools to work systems, and almost a third of those employees had uploaded proprietary financial information to said unmonitored AI tool.

For this cyber-risk, the technical control challenge is structural. Data loss prevention (DLP) tools are trained to evaluate what content is, not whether it's appropriate to share in a given context. They're notoriously hard to tune, with an average 47% false-positive rate that makes security teams reluctant to act aggressively on alerts.

Meanwhile, the employee uploading a contract summary to an unsanctioned AI tool isn't doing it maliciously; they're trying to do their job faster and just don't know better. The human layer here can accomplish two things technical controls can't:

- Data sensitivity labeling: Employees who understand what's sensitive, and can actually classify it, enable the downstream controls to work better.
- Knowing which tools are sanctioned and why: This isn't a compliance checkbox; it's the decision point that happens before any DLP alert ever fires.

This risk becomes even more pronounced with agentic AI tools, where autonomous systems act on sensitive data and inherit a previous user's permissions. In the past several months, we've heard plenty of terrifying stories about what happens when people drive AI agents without guardrails. Two of the worst I've seen lately include a coding agent (allegedly) causing 13-hour service interruptions in AWS, and threat actors hacking password managers because a browsing AI agent ingested a malicious prompt. And those are just some of the horror stories that made it to press.
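That pre-alert decision point — is this tool sanctioned, and is this data safe to share with it? — can be illustrated with a toy example. This is a minimal sketch, not a tool from the article; the endpoint names and sensitivity labels are invented for illustration:

```python
# Illustrative sketch only: a pre-upload check that combines a sanctioned-tool
# allowlist with data-sensitivity labels. Hostnames and labels are hypothetical.

SANCTIONED_AI_TOOLS = {"chat.internal-llm.example.com"}  # approved AI endpoints
ALLOWED_LABELS = {"public", "internal"}                  # labels safe to share

def may_upload(destination_host: str, sensitivity_label: str) -> bool:
    """Allow an upload only to a sanctioned tool, and only for non-sensitive labels."""
    if destination_host not in SANCTIONED_AI_TOOLS:
        return False  # shadow AI: unsanctioned destination
    return sensitivity_label in ALLOWED_LABELS

# Example decisions
print(may_upload("chat.internal-llm.example.com", "internal"))    # True
print(may_upload("some-free-ai-tool.example.net", "public"))      # False: unsanctioned
print(may_upload("chat.internal-llm.example.com", "restricted"))  # False: too sensitive
```

The point of the sketch is that both inputs come from humans: an employee (or a labeling program they participate in) supplies the sensitivity label, and policy awareness determines whether the destination is sanctioned, before any DLP engine ever sees the content.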
Ask a CISO, SOC lead, or auditor in a conference hallway, and you'll hear more and worse shadow AI governance issues whispered about. Those whispers will only grow louder until we teach our human employees what they're giving to their AI agent sidekicks:

- What sensitive data looks like, contextualized per individual, not tossed over the fence as if we expect Sam in Customer Service to know what "sensitive data" means for him versus Dave the Developer.
- What access AI agents have, both individually and cumulatively across data sets and applications. Is it read-only or edit permissions?

Natural language tools will always have natural language vulnerabilities to some degree, no matter what technical controls promise. Your employees are the only patch that can address both.

MFA Bypass with Voice Phishing: Expensive Tech, Simple Human Fix

In January 2026, the ShinyHunters threat group demonstrated a bypass technique that compromised authentication apps and tokens across more than 100 organizations. Researchers publishing about these attacks prior to attacker attribution noted, "There is no substitute for enforcing phishing resistance for access to resources," going on to list YubiKeys, an identity access manager of some sort, or passwordless solutions.

But these solutions are often prohibitively expensive or take time to roll out. ShinyHunters, and anyone who buys that phishing kit off the Dark Web, are tricking people right now. Thankfully, the human control costs almost nothing to teach and immediately protects everywhere, even when the technical solution is unavailable or still being deployed.

No legitimate IT or security team member will ever ask an employee for a one-time password or authentication code. Full stop. If someone asks, over email, over the phone, over Slack, in a ticket, by singing telegram, that's the attack. The employee who knows that and refuses is a more reliable control than the authentication layer the attacker just bypassed.
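The "no one legitimate will ever ask for your code" rule is simple enough that even a toy filter can express it. This is an illustrative sketch only, not a vetted detection ruleset or anything from the article; the solicitation phrases are invented examples:

```python
import re

# Illustrative sketch: flag messages that solicit one-time passwords or MFA codes.
# The phrase patterns below are invented examples, not a production ruleset.
OTP_SOLICITATION = re.compile(
    r"(one[- ]?time (password|code)|verification code|mfa code|2fa code|"
    r"read me the code|authenticator code)",
    re.IGNORECASE,
)

def looks_like_otp_phish(message: str) -> bool:
    """True if the message asks the recipient to hand over an authentication code."""
    return bool(OTP_SOLICITATION.search(message))

print(looks_like_otp_phish("Hi, this is IT. Can you read me the code from your authenticator?"))  # True
print(looks_like_otp_phish("Your meeting has been moved to 3pm."))                                # False
```

A keyword filter like this misses paraphrases and catches nothing on a phone call, which is exactly the article's point: the trained employee applies the same rule across every channel, with no pattern list to maintain.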
This attack, and more and more like it, isn't a case where training can be your fallback checkbox solution. In fact, for a meaningful portion of your user base, training is your primary defense.

The Quantum Distraction and What Attackers Are Actually Doing

Quantum computing gets a lot of airtime in security conversations as an impending threat to encryption. Now, "Q-Day" is definitely a long-term concern. If you're in the middle of government contract renewals, or otherwise store sensitive data that nation-state-level spies want, you'll want to invest. However, quantum decryption is almost certainly not your most pressing problem right now.

You know what is a problem right now? Previously leaked data. Last year, 85% of targeted usernames in data incidents appeared in previous credential leaks. Why would attackers bother breaking encryption to access data when they can just log in as "real" users with credentials bought off the Dark Web from that password leak three years ago?

Look, quantum-resistant cryptography is a real investment category. But teaching employees to use a password manager correctly — randomized, long, unique, updated after a breach — is a cheaper, faster, and more immediately impactful control for the attacks actually happening at scale right now. Don't let the futuristic threat crowd out the mundane one.

Ultimately, for Dark Web credential leaks, shadow AI, and the other cyber-risks evading technical controls, your employees aren't just your organization's last line of defense. They're the only line that was ever in a position to stop these attacks from the start.
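One footnote on the password-manager guidance above: "randomized, long, unique" is cheap to get right because every manager automates it. As an illustrative sketch of what a generator does under the hood (not a tool from the article), Python's standard-library `secrets` module is enough:

```python
import secrets
import string

# Illustrative sketch of what a password manager's generator does:
# a long, random password drawn from a cryptographically secure source.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 24) -> str:
    """Return a random password of the given length using secrets.choice."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

pw = generate_password()
print(len(pw))  # 24
```

Uniqueness per site and rotation after a breach are the parts no snippet can automate; those remain habits the manager supports and the employee supplies.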
About the Author

A. Stryker
Director of Threat Analysis, Fable Security

A. Stryker is the Director of Threat Analysis at Fable Security, a human risk management platform. She specializes in analyzing the intersection of cyber threats and human risk, providing actionable intelligence on threat actor behaviors and social engineering. Stryker has spent over ten years translating technical research and qualitative intelligence into the "so what?" and "what now?" materials that keep more people safe and secure. She previously produced threat intelligence for financial services, cybersecurity vendors, and early-stage startups — including Ivanti, Blackpoint Cyber, and a major U.S. insurance company — and currently leads threat analysis at Fable Security. You can often find her playing tabletop exercise games after her talks at SecTor, DEF CON, and BSides conferences around the United States. Stryker lives in Maryland, growing parsley for butterflies and algae for shrimp.