AI-Generated Phishing Attacks Surged by 14X - Manufacturing Business Technology

Manufacturing Business Technology · Email Security · Published and archived Apr 16, 2026



Fighting AI means using AI.

Mar 12, 2026

One might call the cyber threat forecast for the first 11 months of 2025 "Cloudy with a chance of Skynet." Contrary to many headlines and hyped-up marketing campaigns, Hoxhunt analysis revealed that for most of 2025, under five percent of the attacks on its four million users each month were AI-generated, until Christmas. In December, the forecast darkened into a thundercloud: Hoxhunt researchers uncovered a 14X surge in AI-generated phishing attacks that bypassed email filters and landed in inboxes, rising from four percent to 56 percent of all reported attacks across the Hoxhunt global threat detection network over the holiday season.

Hoxhunt recently unveiled its annual Phishing Trends Report, which is available here. Three findings particularly stand out in this year's report.

First, a late-2025 surge in AI-generated phishing attacks signals a new normal. For several years, AI-generated phishing represented only one to four percent of attacks detected in Hoxhunt's network. But in December 2025, AI-generated phishing campaigns surged 14X, rising to 56 percent, and the trend continued into 2026. Additionally, last month Bugcrowd's Inside the Mind of a Hacker report found that 82 percent of hackers now use AI in their workflows, up from 64 percent in 2023, with AI primarily used for automating tasks, accelerating learning, and analyzing data.

Second, phishing goes beyond email and into the calendar. Phishing campaigns using .ics calendar invites are surging, and they were found to be six times more likely to trick users into clicking than typical phishing attacks. Because these invites automatically appear as meetings in users' calendars, they linger like landmines even after the original attack email has been successfully reported, creating a second, long-lasting opportunity for a malicious click.
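The calendar-invite vector lends itself to a simple defensive check. Below is a minimal sketch (not a method from the Hoxhunt report; all names and the sample invite are illustrative) of how a mail pipeline might unfold an .ics attachment and extract the URLs a phishing invite typically hides in its DESCRIPTION or LOCATION fields:

```python
import re

# Matches http/https URLs up to the next whitespace or quote character.
URL_RE = re.compile(r"https?://[^\s\"'<>]+", re.IGNORECASE)

def extract_invite_urls(ics_text: str) -> list[str]:
    # iCalendar (RFC 5545) folds long lines: a continuation line starts
    # with a space or tab, so unfold before matching, or a folded URL
    # would be split in two and missed.
    unfolded = re.sub(r"\r?\n[ \t]", "", ics_text)
    return URL_RE.findall(unfolded)

# Hypothetical phishing invite with a line-folded link in DESCRIPTION.
invite = (
    "BEGIN:VCALENDAR\r\n"
    "BEGIN:VEVENT\r\n"
    "SUMMARY:Quarterly payroll review\r\n"
    "DESCRIPTION:Review the updated figures at https://examp\r\n"
    " le.test/payroll-login before the call.\r\n"
    "END:VEVENT\r\n"
    "END:VCALENDAR\r\n"
)
print(extract_invite_urls(invite))  # ['https://example.test/payroll-login']
```

Any URL surfaced this way can then be run through the same reputation and sandbox checks applied to links in the email body, which closes the gap the report describes: the invite persisting in the calendar after the carrier email is reported.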
Third, recruitment scams are emerging as a fast-growing threat vector. Attackers are increasingly targeting sales, marketing, and social media teams with fake job opportunities impersonating major brands. These campaigns often attempt to hijack corporate social media or advertising accounts through credential harvesting attacks.

The report also highlights how phishing techniques are evolving: 43 percent of AI-generated phishing emails contain malicious links, 20 percent use open redirects to evade filters, 11 percent contain malicious attachments, and 5 percent include malicious phone numbers tied to callback phishing.

Despite the surge in AI-assisted attacks, the data also shows organizations can significantly reduce risk through behavior-focused training programs. Companies adopting adaptive security training saw a 6X improvement in phishing reporting within six months and an 87 percent reduction in malicious clicks.

Some additional perspective on the findings can be found below.

Mika Aalto, Co-Founder and CEO at Hoxhunt

"Our research shows that AI-generated phishing went from a trickle to a flood almost overnight. The lesson for security leaders is clear: if attackers can use AI to scale social engineering, defenders must use AI to scale human cyber skills.

"The biggest mistake companies can make in the AI era is believing technology alone will solve social engineering. Attackers are targeting human behavior, so the defense must strengthen human behavior as well. The advantage will go to whoever understands that technology is a lever for influencing human psychology, not a replacement for it.

"We've expected AI to reshape cybercrime for years, so the answer isn't panic, it's preparation. Right now there's a wave of alarmist messaging around AI threats that almost resembles social engineering itself. Deepfakes are real, but they're still rare and highly targeted.
If companies focus training on exotic attacks instead of the common social engineering tactics people face every day, they're not optimally managing human risk."

Vincenzo Iozzo, CEO and Co-Founder at SlashID

"AI has dramatically amplified social engineering campaigns. Phishing emails that once required manual customization can now be generated at volume with convincing, context-aware language and a much better conversion rate.

"Deepfakes, both audio and video, have been used in business email compromise and impersonation schemes, with several high-profile cases involving synthetic voice calls to authorize fraudulent wire transfers. AI-powered reconnaissance is also a growing concern: threat actors can use LLMs to rapidly profile and de-anonymize targets by synthesizing publicly available data from LinkedIn, corporate filings, and social media.

"Increased visibility is critical to countering AI-enabled attacks for two distinct reasons. First, as AI tools proliferate within organizations, gaining visibility into how these tools are being used internally becomes essential. This means monitoring permissions, tracking what data is being fed into AI systems, and understanding the intent behind prompts. Without this visibility, organizations face elevated risk from prompt injection attacks as well as insider threats.

"Second, breakout times are steadily decreasing, in large part because of AI-assisted offensive operations. When adversaries can move from initial access to lateral movement in minutes rather than hours, defenders need more comprehensive telemetry across their environments to detect breaches before they escalate.

"The more data points an organization collects and correlates, the higher the probability of catching anomalous behavior in the shrinking window between compromise and impact."
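The open-redirect evasion cited in the report's statistics works by routing the victim through a reputable domain whose redirect endpoint forwards to the attacker's site, so filters see only the trusted host. A minimal illustrative heuristic (an assumption for this sketch, not a tool named in the report) flags links whose query string smuggles a second absolute URL on a different host:

```python
from urllib.parse import urlparse, parse_qs

def looks_like_open_redirect(url: str) -> bool:
    """Heuristic: flag a URL whose query parameters embed a full
    http(s) URL pointing at a different host than the outer link."""
    outer = urlparse(url)
    for values in parse_qs(outer.query).values():
        for value in values:
            inner = urlparse(value)
            if inner.scheme in ("http", "https") and inner.netloc and inner.netloc != outer.netloc:
                return True
    return False

# Hypothetical examples: a redirect-style lure vs. an ordinary link.
print(looks_like_open_redirect(
    "https://trusted.example/redirect?url=https://phish.test/login"))  # True
print(looks_like_open_redirect(
    "https://trusted.example/docs?page=2"))  # False
```

A real filter would also canonicalize encodings and check protocol-relative (`//host`) values, but even this narrow check catches the pattern as described: a benign-looking host carrying a foreign destination in its parameters.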
Ram Varadarajan, CEO at Acalvio

"Threat actors are integrating AI as a force multiplier to accelerate reconnaissance and malware development, using polymorphic code and synthetic identities to maintain resilience against traditional detection. We've now seen this across multiple documented attacks, from sources like Google, Amazon, and Anthropic.

"Increased visibility is critical to countering AI-enabled attacks because it allows defenders to identify the subtle, machine-speed anomalies in behavior and identity that signify an AI-driven intrusion, effectively closing the verification gap created by automated tradecraft.

"AI can also be used to strengthen defenses by orchestrating game-theoretic deception: deploying adaptive honeypots and 'radiant' honeytokens that exploit a model's pattern-matching logic to misdirect and neutralize the attacker without human intervention. Our cybersecurity future is bot-on-bot; bot-on-human defense is a losing proposition."

Krishna Vishnubhotla, VP of Product Strategy at Zimperium

"AI makes phishing harder to detect. Scammers can quickly create polished emails, insert details from LinkedIn, and make them highly personal. It's no longer just copying and pasting. Since they can tell the AI what words or patterns to avoid, many of the usual phishing filters don't even trigger.

"AI also makes phishing faster and more convincing. Scammers can generate dozens of polished, industry-specific emails in seconds and run simple tests to see what works. Personalization is easier now because AI can scan public social profiles, job boards, and more in seconds, building highly accurate pictures of a target's role, projects, or writing quirks."

Rajeev Gupta, Co-Founder & CPO at Cowbell

"Generative AI's ability to interpret complex vulnerability data will be essential in building more accurate and responsive risk models. In the year ahead, cybersecurity best practices must evolve alongside AI adoption.
"Companies should verify AI tools, avoid inputting sensitive data into chatbots, and remain vigilant against increasingly sophisticated AI-generated phishing attacks. Building a culture of awareness and implementing robust AI use policies will be critical to mitigating these emerging risks."

Diana Kelley, CISO at Noma Security

"In 2025, many organizations grappled with the unintended consequences of rapid AI adoption. The big question was how to innovate without opening the organization up to unnecessary risk. We saw models downloaded from open-source repositories that contained malicious code or hidden backdoors, insecure API integrations exposing inference endpoints, and employees relying on generative AI tools without proper guardrails.

"As the Hoxhunt research highlights, attackers also honed convincing AI-generated phishing campaigns. I think the next wave of risk will stem from the broad adoption of agentic AI: systems that leverage the 'reasoning' capabilities of LLMs to drive autonomous workflows. As these agents begin interfacing with enterprise data, APIs, and other agents, long-standing controls like IAM, PAM, and data segmentation will struggle to keep pace as trust boundaries blur.

"To prepare, organizations should implement agentic risk management, starting with established policies and standard operating procedures, supported by technical controls like cryptographic identity attestation and continuous policy enforcement for AI agents. This will allow enterprises to monitor and constrain agent autonomy, gaining the benefits of agentic AI without putting the organization at unnecessary risk."