Deepfake Awareness High at Orgs, But Cyber Defenses Badly Lag - Dark Reading
NEWS
Deepfake Awareness High at Orgs, But Cyber Defenses Badly Lag
The vast majority of organizations are encountering AI-augmented threats, but remain confident in their defenses, despite inadequate detection investment and more than half falling to successful attacks.
Robert Lemos,Contributing Writer
October 10, 2025
5 Min Read
AI-augmented deepfakes are becoming more and more common in cyberattacks on businesses and government agencies, and most organizations are aware of the danger. However, there's a preparation paradox at work: most lag behind in investing in technical solutions for defending against deepfakes, experts say — even as they feel that they're ready for the onslaught.
On Oct. 7, AI giant OpenAI published research showing that a growing number of criminal and nation-state groups are using large language models (LLMs) to improve their attack workflows and create better phishing lures and malware. A second report, published by email security firm Ironscales on Oct. 9, found that these approaches seem to be working: Overall, the vast majority of midsized firms (85%) have seen attempts at deepfake and AI-voice fraud, and more than half (55%) suffered financial losses from such attacks, according to the survey-based report.
Most companies are taking the threat seriously but are nonetheless struggling to keep up, says Eyal Benishti, CEO of Ironscales.
"The deepfake threat landscape looks, above all else, dynamic," he says. "While email threats and static imagery are still the most commonly encountered vectors, there is a wide diversity of other forms of deepfakes that are quickly growing in prevalence. In fact, we're seeing more and more of every kind of deepfake in the wild."
Attackers are using a variety of AI techniques to enhance their attack pipeline. Human digital twins can be trained on public information about a person to help craft more realistic phishing attacks and, combined with voice samples, convincing audio deepfakes. Concerns over misuse of AI drove Microsoft to mostly scuttle a voice-cloning feature that could have been integrated into apps such as Teams, technology that would also have let an attacker hijack someone's voice for all kinds of fraud attempts.
AI-Generated Cyberattacks Proliferate
Attackers are already using such techniques, according to cybersecurity experts. The number of audio deepfakes encountered by businesses is on track to double in 2025, according to CrowdStrike's "2025 Threat Hunting Report." Currently, static deepfake images and AI-augmented business email compromise (BEC) attacks top the list of techniques encountered by businesses — with 59% of organizations encountering those techniques, according to the Ironscales report, which surveyed 500 US-based information-technology and cybersecurity professionals working at mid-sized companies with 1,000 to 10,000 employees.
While phishing used to cast a wide, generalized net for every fish at once, now it's about using the exact bait needed for each individual fish, says April Lenhard, principal product manager for cyber threat intelligence at cybersecurity firm Qualys.
"Much like deepfake photos that easily blur the line between real and fake, AI-crafted emails are now indistinguishable from an email a real boss or family member would send, which makes them much more dangerous," she says.
The typical company financially impacted by deepfake attacks lost an estimated $167,000 in the past 12 months; the mean loss was $280,000, skewed by a few outsized incidents. Source: Ironscales
Various types of deepfake audio and video impersonations are also increasingly prevalent, with more than 40% of companies encountering those techniques in the past year, according to the Ironscales survey.
Companies are trying to keep up on the cybersecurity awareness front, with 88% providing deepfake-related training in the past year, up from 68% in 2024. Yet while almost every cybersecurity professional expresses confidence in their company's ability to defend against a deepfake attack, and nearly three-quarters say they're "very confident," the majority of organizations have failed to fend off such attacks and have suffered financial losses, with the average victim losing an estimated $167,000. (That figure is adjusted by Dark Reading; Ironscales reported a mean loss of $280,000, which is skewed by outsized losses, including the 5% of surveyed companies that lost more than $1 million.)
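The gap between those two figures is a basic statistics effect: a handful of million-dollar incidents drags the mean well above what a typical victim loses, which is why a median-style figure is more representative. A minimal illustration with hypothetical numbers (not the survey's underlying data):

```python
from statistics import mean, median

# Hypothetical loss figures (illustrative only, not Ironscales survey data):
# most victims lose a moderate amount, but a small minority lose over $1M.
losses = [120_000] * 90 + [200_000] * 5 + [1_500_000] * 5

print(f"mean:   ${mean(losses):,.0f}")    # mean:   $193,000
print(f"median: ${median(losses):,.0f}")  # median: $120,000
```

Five outliers out of 100 are enough to pull the mean more than 60% above the median, mirroring how the 5% of surveyed companies with $1 million-plus losses inflate the $280,000 mean.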
Deepfake Defense Means Awareness Training, Good Processes & Tools
Organizations are taking the threat seriously, with 71% calling deepfake defense a top priority over the next 12 to 18 months, but most consider their current defenses sufficient: Two-thirds of organizations have not invested in defenses against AI-augmented threats.
If attackers continue to adopt AI techniques, this imbalance between attackers and defenders could get worse, says Nicole Carignan, senior vice president of security and AI strategy at Darktrace, a cybersecurity platform provider.
"The challenge now is that AI can be used to reduce the skill barrier to entry and speed up production to a higher quality," she says. "Since the sophistication of deepfakes are getting harder to detect, it is imperative to turn to AI-augmented tools for detection, as people alone cannot be the last line of defense."
Companies should continue to train their employees and create policies that limit the damage any one person, even a top executive, can do to the company, says Ironscales' Benishti.
"Develop policies that make it impossible for a single employee's bad decision to result in compromise," he says. "For all wire transfers, invoice payments, and payroll matters, ensure new requests go through multiple levels of authorization, escalating with the size of the payment."
Finally, tools can help companies detect threats that even the most security-savvy employee might miss, he says. By catching threats before they reach the workforce, these tools also keep employees productive rather than leaving them to spot increasingly convincing phishing attacks on their own.
About the Author
Robert Lemos
Contributing Writer
Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT's Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline Journalism (Online) in 2003 for coverage of the Blaster worm. Crunches numbers on various trends using Python and R. Recent reports include analyses of the shortage in cybersecurity workers and annual vulnerability trends.