Cybersecurity in a Race to Unmask a New Wave of AI-Borne Deepfakes - Dark Reading
Cybersecurity in a Race to Unmask a New Wave of AI-Borne Deepfakes
Kevin Mandia, CEO of Mandiant at Google Cloud, calls for content "watermarks" as the industry braces for a barrage of mind-bending AI-generated fake audio and video traffic.
Kelly Jackson Higgins,Editor-in-Chief,Dark Reading
May 10, 2024
4 Min Read
RSA CONFERENCE 2024 – San Francisco – Everyone's talking about deepfakes, but the majority of AI-generated synthetic media circulating today will seem quaint in comparison to the sophistication and volume of what's about to come.
Kevin Mandia, CEO of Mandiant at Google Cloud, says it's likely a matter of months before the next generation of more realistic and convincing deepfake audio and video becomes mass-produced with AI technology. "I don't think it's [deepfake content] been good enough yet," Mandia said here in an interview with Dark Reading. "We are right before the storm of synthetic media hitting, where it's really a mass manipulation of people's hearts and minds."
The election year is of course a factor in the expected boom in deepfakes. The relative good news is that to date, most audio and video deepfakes have been fairly simple to spot either by existing detection tools or savvy humans. Voice-identity security vendor Pindrop says it can ID and stop most phony audio clips, and many AI image-creation tools infamously fail to render realistic-looking human hands — some generating hands with nine fingers, for example — a dead giveaway of a phony image.
Security tools that detect synthetic media are just now hitting the market, including one from Reality Defender, a startup specializing in detecting AI-generated media that was named the Most Innovative Startup of 2024 here this week in the RSA Conference Innovation Sandbox competition.
Mandia, who says he is an investor in Real Factors, a startup working on AI-generated content fraud detection, argues that the main way to stop deepfakes from fooling users and overshadowing real content is for content-makers to embed "watermarks." Microsoft Teams and Google Meet clients, for example, would be watermarked, he says, with immutable metadata, signed files, and digital certificates.
"You're going to see a huge uptick of this, at a time when privacy is being emphasized" as well, he notes. "Identity is going to get far better and provenance of sources will be far better," he says, to guarantee authenticity on each end.
"My thought is this watermark could reflect policies and profiles of risk that each company that creates content has," Mandia explains.
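Mandia doesn't describe a specific implementation, but the approach he outlines resembles existing content-provenance schemes such as C2PA: the content creator binds metadata to a cryptographic hash of the media and signs the result, so any later alteration of the file or its metadata fails verification. The sketch below is purely illustrative and is not from the article; all function names are hypothetical, and it uses a symmetric HMAC with a shared key for brevity, where a real deployment would use certificate-backed public-key signatures as Mandia suggests.

```python
import hashlib
import hmac
import json

# Illustrative sketch only: real provenance systems (e.g., C2PA) use
# X.509 certificates and public-key signatures rather than a shared key.

def make_manifest(media_bytes: bytes, metadata: dict, key: bytes) -> dict:
    """Bind metadata to the media's hash, then sign the combination."""
    payload = {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "metadata": metadata,
    }
    body = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(key, body, hashlib.sha256).hexdigest()
    return payload

def verify_manifest(media_bytes: bytes, manifest: dict, key: bytes) -> bool:
    """Recompute hash and signature; any tampering fails the check."""
    claimed = dict(manifest)
    sig = claimed.pop("signature")
    if hashlib.sha256(media_bytes).hexdigest() != claimed["sha256"]:
        return False  # media bytes were altered after signing
    body = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)  # metadata tampering fails here

key = b"demo-signing-key"          # hypothetical creator-held key
clip = b"\x00\x01example-video-bytes"
manifest = make_manifest(clip, {"creator": "Example Corp", "tool": "Meet"}, key)

print(verify_manifest(clip, manifest, key))         # True: untouched clip
print(verify_manifest(clip + b"X", manifest, key))  # False: altered media
```

Under a scheme like this, the per-company "policies and profiles of risk" Mandia describes could simply ride along in the signed metadata, since any edit to them invalidates the signature.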
Mandia warns that the next wave of AI-generated audio and video will be especially tough to detect as phony. "What if you have a 10-minute video and two milliseconds of it are fake? Is the technology ever going to exist that's so good to say, 'That's fake'? We're going to have the infamous arms race, and defense loses in an arms race."
Making Cybercriminals Pay
Cyberattacks overall have become more costly financially and reputation-wise for victim organizations, Mandia says, so it's time to flip the equation and make it riskier for the threat actors themselves by doubling down on sharing attribution intel and naming names.
"We've actually gotten good at threat intelligence. But we're not good at the attribution of the threat intelligence," he says. The model of continuously putting the burden on organizations to build up their defenses is not working. "We're imposing cost on the wrong side of the hose," he says.
Mandia believes it's time to revisit treaties with the safe harbors of cybercriminals and to double down on calling out the individuals behind the keyboard and sharing attribution data in attacks. Take this week's sanctions against, and naming of, the leader of the prolific LockBit ransomware group by international law enforcement, he says. Officials in Australia, Europe, and the US teamed up and slapped sanctions on Russian national Dmitry Yuryevich Khoroshev, 31, of Voronezh, Russia, for his alleged role as ringleader of the cybercrime organization. They offered a $10 million reward for information on him and released his photo, a move that Mandia applauds as the right strategy for raising the risk for the bad guys.
"I think that does matter. If you're a criminal and all of a sudden the whole world has your photo, that's a problem for you. That's a deterrent and a far bigger deterrent than 'raising the cost' to an attacker," Mandia maintains.
Law enforcement, governments, and private industry need to revisit how to start identifying the cybercriminals effectively, he says, noting that a big challenge with unmasking is privacy and civil liberty laws in different countries. "We've got to start addressing this without impacting civil liberties," he says.
About the Author
Kelly Jackson Higgins
Editor-in-Chief, Dark Reading
Kelly Jackson Higgins is the Editor-in-Chief of Dark Reading and VP, cybersecurity editorial at Informa TechTarget, where she leads editorial strategy for the company's three cybersecurity media brands: Dark Reading, SearchSecurity and Cybersecurity Dive. She is an award-winning veteran technology and business journalist with three decades of experience in reporting and editing for various technology and business publications and major media properties. Jackson Higgins was selected three consecutive times as one of the Top 10 Cybersecurity Journalists in the U.S., and was named as one of Folio's 2019 Top Women in Media. She has been with Dark Reading since its launch in 2006.