
Why Take9 Won't Improve Cybersecurity - Dark Reading

Dark Reading · Archived Mar 18, 2026



CYBERSECURITY OPERATIONS · COMMENTARY

Why Take9 Won't Improve Cybersecurity

The latest cybersecurity awareness campaign asks users to pause for nine seconds before clicking — but this approach misplaces responsibility and ignores the real problems of system design.

Bruce Schneier, Arun Vishwanath · May 28, 2025 · 7 Min Read

There's a new cybersecurity awareness campaign: Take9. The idea is that people — you, me, everyone — should just pause for nine seconds and think more about the link they are planning to click on, the file they are planning to download, or whatever it is they are planning to share. There's a website — of course — and a video, well-produced and scary. But the campaign won't do much to improve cybersecurity. The advice isn't reasonable, it won't make either individuals or nations appreciably safer, and it deflects blame from the real causes of our cyberspace insecurities.

First, the advice is not realistic. A nine-second pause is an eternity in something as routine as using your computer or phone. Try it; use a timer. Then think about how many links you click on and how many things you forward or reply to. Are we pausing for nine seconds after every text message? Every Slack ping? Does the clock reset if someone replies midpause? What about browsing — do we pause before clicking each link, or after every page loads? The logistics quickly become impossible. I doubt they tested the idea on actual users.

Second, it largely won't help. The industry should know, because we tried it a decade ago. "Stop. Think. Connect." was an awareness campaign from 2016, run by the Department of Homeland Security — this was before CISA — and the National Cybersecurity Alliance. The message was basically the same: Stop and think before doing anything online. It didn't work then, either.
Take9's website says, "Science says: In stressful situations, wait 10 seconds before responding." The problem with that is that clicking on a link is not a stressful situation; it's a normal one that happens hundreds of times a day. Maybe you can train a person to count to 10 before punching someone in a bar, but not before opening an attachment. And there is no basis in science for it. It's a folk belief, all over the Internet but with no actual research behind it — like the five-second rule when you drop food on the floor. In emotionally charged contexts, most people are already overwhelmed, cognitively taxed, and not functioning in a space where rational interruption works as neatly as this advice suggests.

Pausing Adds Little

Pauses help us break habits. If we are clicking, sharing, linking, downloading, and connecting out of habit, a pause to break that habit works. But the problem here isn't habit alone. The problem is that people aren't able to differentiate between something legitimate and an attack.

The Take9 website says that nine seconds is "time enough to make a better decision," but there's no use telling people to stop and think if they don't know what to think about after they've stopped. Pause for nine seconds and ... do what? Take9 offers no guidance. It presumes people have the cognitive tools to understand the myriad potential attacks and to figure out which of the thousands of Internet actions they take is harmful. If people don't have the right knowledge, pausing for longer — even a minute — will do nothing to add it.

The three-part suspicion, cognition, and automaticity model (SCAM) is one way to think about this. The first pathway is lack of knowledge — not knowing what's risky and what isn't. The second is habit: people doing what they always do.
And the third is flawed mental shortcuts, like believing PDFs to be safer than Microsoft Word documents, or that mobile devices are safer than computers for opening suspicious emails. These pathways don't always occur in isolation; sometimes they happen together or sequentially. They can influence each other or cancel each other out. For example, a lack of knowledge can lead someone to rely on flawed mental shortcuts, while those same shortcuts can reinforce that lack of knowledge. That's why meaningful behavioral change requires more than just a pause; it needs cognitive scaffolding and system designs that account for these dynamic interactions.

A successful awareness campaign would do more than tell people to pause. It would guide them through a two-step process: first, trigger suspicion, motivating them to look more closely; then, direct their attention by telling them what to look at and how to evaluate it. When both happen, the person is far more likely to make a better decision.

This means that pauses need to be context specific. Think about email clients that embed warnings like "EXTERNAL: This email is from an address outside your organization" or "You have not received an email from this person before." Those are specific, and useful. We could imagine an AI plug-in that warns: "This isn't how Bruce normally writes." But of course there's an arms race in play; the bad guys will use these systems to figure out how to bypass them.

This is all hard. The old cues aren't there anymore. Current phishing attacks have evolved far beyond the old Nigerian scams filled with grammar mistakes and typos. Text-message, voice, and video scams are even harder to detect. There isn't enough context in a text message for a system to flag, and in voice or video it's much harder to trigger suspicion without disrupting the ongoing conversation.
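The context-specific email banners described above are simple to sketch in code. The snippet below is a minimal illustration only (the organization domain, function name, and known-senders set are all hypothetical), not any real mail gateway's implementation:

```python
# Minimal sketch of context-specific warning banners, as described in the
# article. ORG_DOMAIN and the known_senders set are illustrative assumptions.

ORG_DOMAIN = "example.com"  # hypothetical organization domain


def banner_warnings(sender: str, known_senders: set) -> list:
    """Return the context-specific warnings to prepend to a message."""
    warnings = []
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain != ORG_DOMAIN:
        warnings.append(
            "EXTERNAL: This email is from an address outside your organization"
        )
    if sender.lower() not in known_senders:
        warnings.append("You have not received an email from this person before")
    return warnings
```

A message from an unknown outside address would get both banners; mail from a colleague you correspond with regularly would get none, which is exactly the specificity the article argues a bare nine-second pause lacks.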
And all the false positives — when the system flags a legitimate conversation as a potential scam — work against people's own intuition. People will just start ignoring their own suspicions, just as most people ignore all sorts of warnings that their computer puts in their way.

Even if we do all of this well and correctly, we can't make people immune to social engineering. Recently, both cyberspace activist Cory Doctorow and security researcher Troy Hunt — two people you'd expect to be excellent scam detectors — got phished. In both cases, it was just the right message at just the right time.

It's even worse if you're a large organization. Security isn't based on the average employee's ability to detect a malicious email; it's based on the worst person's inability — the weakest link. Even if awareness raises the average, it won't help enough.

Don't Place Blame Where It Doesn't Belong

Finally, all of this is bad public policy. The Take9 campaign tells people that they can stop cyberattacks by taking a pause and making a better decision. What's not said, but certainly implied, is that if they don't take that pause and don't make those better decisions, then they're to blame when an attack occurs. That's simply not true, and the blame-the-user message is one of the worst mistakes our industry makes.

Stop trying to fix the user. It's not the user's fault if they click on a link and it infects their system. It's not their fault if they plug in a strange USB drive or ignore a warning message they can't understand. It's not even their fault if they get fooled by a look-alike bank website and lose their money. The problem is that we've designed these systems to be so insecure that regular, nontechnical people can't use them with confidence. We're using security awareness campaigns to cover up bad system design. Or, as security researcher Angela Sasse first said in 1999: "Users are not the enemy."

We wouldn't accept that in other parts of our lives.
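The weakest-link argument above can be made concrete with a back-of-the-envelope calculation: if each of N employees independently falls for a given phish with probability p, the chance that at least one does is 1 - (1 - p)^N. The numbers and function name below are illustrative, and the independence assumption is a simplification:

```python
# Back-of-the-envelope illustration of the "weakest link" point: even a low
# per-employee click rate compounds across an organization.

def org_compromise_probability(p_click: float, n_employees: int) -> float:
    """P(at least one employee clicks), assuming independent decisions."""
    return 1.0 - (1.0 - p_click) ** n_employees


# With 1,000 employees, halving a 2% per-employee click rate to 1% through
# awareness training leaves the organizational probability effectively
# unchanged: both values remain very close to 1.
before = org_compromise_probability(0.02, 1000)
after = org_compromise_probability(0.01, 1000)
```

This is why raising the average matters so little: the organizational number is driven by the exponent N, not by modest improvements in p.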
Imagine Take9 in other contexts. Food service: "Before sitting down at a restaurant, take nine seconds: Look in the kitchen, maybe check the temperature of the cooler, or whether the cooks' hands are clean." Aviation: "Before boarding a plane, take nine seconds: Look at the engine and cockpit, glance at the plane's maintenance log, ask the pilots if they feel rested." This is obviously ridiculous advice. The average person doesn't have the training or expertise to evaluate restaurant or aircraft safety — and we don't expect them to. We have laws and regulations in place that allow people to eat at a restaurant or board a plane without worry.

But — we get it — the government isn't going to step in and regulate the Internet. These insecure systems are what we have. Security awareness training, and the blame-the-user mentality that comes with it, are all we have. So if we want meaningful behavioral change, it needs a lot more than just a pause. It needs cognitive scaffolding and system designs that account for all the dynamic interactions that go into a decision to click, download, or share. And that takes real work — more work than just an ad campaign and a slick video.

About the Authors

Bruce Schneier
Fellow & Lecturer, Harvard Kennedy School, and Chief of Security Technology, Inrupt, Inc.
Bruce Schneier is a fellow and lecturer at the Harvard Kennedy School, and Chief of Security Technology at Inrupt, Inc. He can be found at www.schneier.com.

Arun Vishwanath
Technologist
Arun Vishwanath, Ph.D., MBA, is among the foremost experts on the "people problem" of cybersecurity.
He is the author of The Weakest Link: How to Diagnose, Detect, and Defend Users from Phishing, published by MIT Press. His research on the science of cybersecurity focuses on the biggest vulnerability in enterprise security: users. His body of work includes the development of methodologies to quantify human cyber-risk, approaches to diagnose how and why people are at risk through social engineering, and techniques to mitigate this risk. Arun, an alumnus of the Berkman Klein Center at Harvard University, has held faculty positions at the University at Buffalo and Indiana University. He has authored close to 50 peer-reviewed research papers on the science of security and has written pieces for CNN, the Washington Post, and other leading media. His views on cybersecurity have also appeared in Wired and in reports such as the Verizon "Data Breach Investigations Report" (DBIR).