AI scams in 2026: phishing, fraud, and Webroot’s insights
Cybercrime is shifting in a noticeable way. Instead of relying primarily on technical exploits, attackers are increasingly focusing on manipulating user behavior, often with the help of AI.
According to Webroot, a cybersecurity company focused on endpoint and identity protection, one of the defining changes in 2026 is how accessible and scalable these attacks have become. AI tools allow threat actors to produce convincing messages, impersonate trusted brands, and automate campaigns at a level that was difficult to achieve just a few years ago.
We spoke with a Webroot representative to understand how scams are evolving, why financial attacks are increasing, and what users can realistically do to protect themselves.
About Guillaume Pascual
Director of Product Marketing, Cybersecurity
Guillaume Pascual has spent over two decades working across the technology industry, with experience at Apple, Microsoft, and Norton. He currently leads product marketing for Webroot's consumer cybersecurity portfolio at OpenText, where his work focuses on making complex security topics accessible and relevant to real people. With AI-powered scams becoming one of the fastest-growing threats facing consumers, Guillaume is particularly interested in how clear product narrative and smart strategy can help everyday users stay ahead of increasingly sophisticated attacks.
Key takeaways:
AI is accelerating cybercrime. It enables highly convincing phishing and scams at scale, making threats harder to detect.
Financial scams are becoming more sophisticated. Attackers mimic real payment flows and trusted brands to reduce suspicion.
Phishing is outpacing traditional malware. Users are now far more likely to encounter malicious web content – phishing, fake shopping sites, and other fraudulent pages – than file-based malware.
Scams are increasingly personalized. Voice cloning and AI-generated personas, also referred to as deepfakes, make attacks more convincing.
Security is shifting beyond malware detection. Solutions like Webroot focus on behavior, phishing prevention, and real-time threat analysis.
What’s changed in the online threat landscape this year?
AI has fundamentally changed the economics of cybercrime. Attackers can now generate high volumes of highly realistic phishing messages that closely imitate trusted brands and services, and they can do it at scale.
That shift is already visible in the data. While traditional malware delivered via attachments continues to decline, phishing has surged dramatically. Webroot points to roughly a 200% increase in email-based phishing between 2024 and 2025 alone.
Another key development is the intent behind these campaigns. Many attacks are now designed to enable direct financial theft or full account takeover, rather than simply collecting login credentials.
Why are financial scams so effective right now?
A big part of the effectiveness comes from how closely these scams resemble everyday digital interactions. Messages often replicate familiar workflows such as payment alerts, account restrictions, document-signature requests, or subscription updates, which encourages people to respond quickly without questioning the request.
There’s also a growing trend of combining multiple trusted brands within a single attack. Webroot highlights campaigns that blend services like American Express, DocuSign, and ShareFile, sometimes hosted on legitimate cloud infrastructure. This layered approach builds credibility and makes interactions feel legitimate enough to lower suspicion.
What are the most common AI-driven scams affecting home users?
The variety of scams is expanding, but they share a common theme – personalization.
One of the more concerning examples is voice cloning. With just a few seconds of audio pulled from social media, attackers can replicate a person’s voice and stage convincing emergency calls. These situations often lead to immediate financial decisions, as people react emotionally rather than verifying the request.
There are also long-form investment scams, where attackers build relationships over time using AI-generated personas. Once trust is established, victims are guided toward fraudulent platforms that appear legitimate.
Phishing websites have also become significantly more convincing. Many are designed to closely mirror real brands, making it difficult for users to spot the difference. In the crypto space, scams are increasingly focused on wallet authorization, where users unknowingly grant access to their funds instead of having credentials stolen.
What psychological tactics are scammers relying on?
The underlying psychological triggers remain consistent, but their execution has become more refined.
Scams often rely on urgency, encouraging users to act quickly before verifying a request. Fear is another common factor, particularly in messages related to financial issues or suspicious activity. At the same time, attackers use trust and authority to make messages appear credible, often posing as colleagues, financial institutions, or internal teams.
Certain scams take the opposite approach: the scammer – often an AI chatbot – spends weeks building trust through casual conversation, then convinces the victim to invest in a fake crypto or trading scheme. These long‑con frauds are known as pig‑butchering scams, where victims are “fattened up” with attention before being exploited.
With AI, these elements can be tailored more precisely and delivered at scale, which increases their effectiveness and makes them harder to detect.
Why don’t traditional antivirus tools always stop these scams?
Many modern scams don’t involve malicious files. Instead, the risk comes from interactions, such as clicking a link, entering credentials, or responding to a convincing message.
Traditional antivirus tools are primarily designed to detect known threats, which makes them less effective against phishing and social engineering. Addressing these risks requires additional layers, including analyzing URLs and web pages, filtering traffic, and focusing on identity and behavioral signals.
Over the past year, across the millions of devices running Webroot, an average of 1.7 malicious URLs per user was detected and blocked. That scale of risk underscores why proactive protection matters.
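To make the layered approach more concrete, here is a simplified sketch in Python of the kind of URL analysis described above: it flags pages that imitate trusted brands or lean on urgency keywords instead of scanning files for known signatures. The brand and keyword lists are placeholders for the example, and this is an illustration only, not Webroot’s actual detection engine.

```python
# A simplified illustration of heuristic URL analysis -- not Webroot's engine.
# TRUSTED_BRANDS and SUSPICIOUS_KEYWORDS are placeholder lists for the example.
from urllib.parse import urlparse
from difflib import SequenceMatcher

TRUSTED_BRANDS = ["americanexpress.com", "docusign.com", "sharefile.com"]
SUSPICIOUS_KEYWORDS = ["verify", "suspended", "urgent", "invoice", "signin"]

def looks_like_phishing(url: str) -> bool:
    """Flag URLs that imitate trusted brands or lean on pressure keywords."""
    parsed = urlparse(url)
    host = (parsed.hostname or "").lower()
    path = parsed.path.lower()

    for brand in TRUSTED_BRANDS:
        name = brand.split(".")[0]
        exact_or_subdomain = host == brand or host.endswith("." + brand)

        # 1. Lookalike domain: close to a trusted brand, but not actually it.
        if not exact_or_subdomain and SequenceMatcher(None, host, brand).ratio() > 0.8:
            return True

        # 2. Brand name embedded in an unrelated domain (e.g. docusign.example.com).
        if name in host and not exact_or_subdomain:
            return True

    # 3. Two or more urgency/credential keywords in the path -- a weak signal on its own.
    if sum(word in path for word in SUSPICIOUS_KEYWORDS) >= 2:
        return True

    return False

print(looks_like_phishing("https://docusign.secure-review-portal.com/verify/signin"))  # True
print(looks_like_phishing("https://www.docusign.com/features"))                        # False
```

Real products layer many more signals on top of this, such as reputation data, page content, and certificate history, but the principle is the same: the risk lives in the interaction with the page, not in a downloaded file.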
How does Webroot approach detection and prevention differently?
Webroot’s approach is built around speed and adaptability. By relying on cloud intelligence, it can evaluate files and websites in real time rather than depending on locally stored threat databases.
At the same time, it monitors behavior on the device to identify suspicious activity as it happens. If something does slip through, rollback capabilities can help undo the damage and restore the system to a safe state.
Machine learning is also used to identify emerging threats, and the system is designed to remain lightweight, which helps maintain device performance without adding unnecessary overhead.
Visit Webroot Identity Theft Protection
How is Webroot adapting to payment-platform fraud and brand abuse?
Webroot is focusing on improving how quickly threats are identified and blocked before users interact with them. This includes strengthening real-time website filtering so fraudulent payment or banking pages can be stopped before they load.
In parallel, more attention is being placed on detecting unusual account behavior, such as logins from unexpected locations or sudden changes in activity that could indicate fraud. The company is also adapting its anti-phishing capabilities to better detect AI-generated scam links, including those delivered through SMS messages and QR codes, which are becoming more common entry points.
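As a rough illustration of the “unusual account behavior” signal mentioned above, the sketch below flags a login from a country the account has never used before, or a country change within an hour of the previous login. The data model and thresholds are assumptions made for this example, not a description of Webroot’s detection logic.

```python
# Assumption-level sketch of unusual-login detection, not Webroot's logic.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class LoginEvent:
    user: str
    country: str
    timestamp: datetime

def is_suspicious(history: list[LoginEvent], new_login: LoginEvent) -> bool:
    """Flag logins from a never-seen country, or rapid country hopping."""
    past = [e for e in history if e.user == new_login.user]
    if not past:
        return False  # no baseline yet for this account

    # First login ever from this country.
    known_countries = {e.country for e in past}
    if new_login.country not in known_countries:
        return True

    # "Impossible travel": a different (known) country within the last hour.
    last = max(past, key=lambda e: e.timestamp)
    if (last.country != new_login.country and
            new_login.timestamp - last.timestamp < timedelta(hours=1)):
        return True

    return False

history = [LoginEvent("alice", "FR", datetime(2026, 5, 1, 9, 0)),
           LoginEvent("alice", "FR", datetime(2026, 5, 2, 9, 5))]
print(is_suspicious(history, LoginEvent("alice", "BR", datetime(2026, 5, 2, 9, 20))))  # True
```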
What should users do if they suspect fraud or account takeover?
Responding quickly can significantly reduce the impact of an incident. Securing financial accounts should come first, which may include freezing or replacing cards if necessary.
Passwords should be updated immediately across affected services, and multi-factor authentication should be enabled wherever possible. It’s also important to contact the relevant platforms directly and consider identity recovery services if needed, especially in cases involving financial loss or personal data exposure.
What are the easiest protections for home users?
Simple habits still make a meaningful difference in reducing risk. Verifying requests before taking action is one of the most effective steps, particularly when money or account access is involved.
Using strong, unique passwords (ideally with a password manager), enabling multi-factor authentication, and keeping software updated all contribute to a more secure setup. Monitoring for data breaches and using secure connections on public Wi-Fi with a VPN can provide additional protection, but overall awareness and caution remain key factors.
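One of those habits, monitoring for data breaches, is easy to try directly. The sketch below checks a password against the public Have I Been Pwned “Pwned Passwords” range API, which receives only the first five characters of the password’s SHA-1 hash, so the password itself never leaves your device. This is an independent example of breach monitoring, not a Webroot feature.

```python
# Check whether a password appears in known breaches via the HIBP range API.
# Only the first 5 hex characters of the SHA-1 hash are sent (k-anonymity).
import hashlib
import urllib.request

def breach_count(password: str) -> int:
    """Return how many times a password appears in the Pwned Passwords corpus."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]

    req = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "breach-check-example"},
    )
    with urllib.request.urlopen(req) as resp:
        body = resp.read().decode()

    # Response is lines of "HASH_SUFFIX:COUNT"; match the rest of the hash locally.
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    print(breach_count("password123"))  # a large number: never use this password
```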
In simple terms – why does Webroot’s approach work?
Webroot combines cloud-based intelligence, real-time behavioral monitoring, and anti-phishing protection to identify and stop threats efficiently, while maintaining a lightweight system that does not impact performance.
The bigger picture: scams are becoming more human
Cybercrime is increasingly focused on manipulating user behavior through familiar brands, trusted platforms, and realistic communication. As these tactics scale with AI, even experienced users can struggle to spot scams.
This shift is changing what effective protection looks like. Many attacks rely on phishing and fake websites rather than traditional malware, which is why solutions like Webroot are focusing more on behavior, real-time analysis, and phishing prevention.
Verifying unexpected requests, especially those involving payments or account access, is now a critical habit, particularly when combined with tools designed to detect modern, AI-driven threats.
Visit Webroot Identity Theft Protection