Email Security · Mar 30, 2026

Voice Phishing Statistics 2026: Startling Data - SQ Magazine

SQ Magazine · Archived Mar 30, 2026 · Full text saved



Voice phishing, or vishing, has evolved from simple scam calls into a sophisticated fraud channel powered by AI and social engineering. Financial institutions now face impersonation attacks that mimic customer voices, while enterprises deal with fake IT calls that extract credentials in minutes. As phone-based scams merge with AI voice cloning and caller spoofing, the risks escalate for both consumers and businesses, making it critical to understand the latest data and trends. Let's dive into the numbers shaping voice phishing today.

Editor's Choice

- Voice phishing incidents have increased significantly in recent years, with some cybersecurity reports indicating sharp growth driven by AI-enabled scams.
- The global average cost of a data breach was $4.44 million in 2025.
- 6.5% of employees admitted to sharing sensitive data during voice phishing calls.
- AI-driven scams increased by 1,210% in 2025, signaling a major shift toward automated fraud.
- Phishing and spoofing scams rose by 85.6% year-over-year in 2025, with losses doubling per incident.

Recent Developments

- Over 3.8 million phishing attacks were recorded globally in 2025, reflecting sustained growth.
- Quarterly phishing activity surpassed 1 million attacks in Q1 2025, the highest level since 2023.
- AI-powered phishing emails increased by 1,265% since late 2022, accelerating multi-channel scams.
- Vishing attacks using deepfake voices rose by 170% in a single quarter of 2025.
- AI impersonation scams grew 148% in 2025, spanning calls, video, and messaging platforms.
- Scam losses linked to phone-based fraud increased 16% in 2025, highlighting the growing role of voice channels.
- Generative AI tools now enable mass-scale personalized scams, reducing technical barriers for attackers.

Voice Phishing Victim Rate Among Organizations

Around 70% of organizations report being victims of voice phishing (vishing) attacks. This means 7 out of every 10 businesses have experienced at least one phone-based social engineering incident.
The high 70% exposure rate highlights how voice phishing has become a mainstream cyber threat across industries. Such a large share indicates that traditional security measures alone are no longer sufficient against AI-driven and spoofed-call attacks, and it underscores the urgent need for employee training, call verification protocols, and multi-factor authentication (MFA) to reduce risk. (Reference: Programs.com)

Financial Losses from Voice Phishing

- Consumers reported $12.5 billion in fraud losses in 2024, a 25% increase year-over-year.
- Imposter scams, including voice phishing, accounted for $2.95 billion in losses in 2024.
- Vishing-related incidents cost organizations an average of $14 million annually.
- Median losses from phishing scams increased from $1,000 to $2,060 in 2025.
- Tech support scams, often involving voice calls, caused $924.5 million in U.S. losses in 2023.
- Global AI-driven fraud losses are projected to reach $40 billion by 2027.
- Over 10% of banks reported deepfake vishing losses exceeding $1 million per case.
- Reported financial losses from phishing grew from $18.7 million in 2023 to $70 million in 2024.
- Investment-related scams tied to phishing ecosystems generated $6.5 billion in losses in 2024.

Demographics of Voice Phishing Victims

- Adults aged 60+ made up 20% of voice phishing victims in one 2023 dataset, showing older adults remain a major target group.
- In FTC data, 24% of older scam victims said the fraud started with a phone call, versus 10% of younger consumers.
- The FBI's IC3 reported that people aged 60+ filed 147,127 cybercrime complaints and lost $4.8 billion in 2024, the highest loss total of any age group.
- Nearly 46% of call center scam victims were over 60, and they suffered 69% of reported losses in the FBI's elder fraud data.
- About 6.5% of employees have shared sensitive information in successful vishing attacks.

When Are Phishing Emails Most Commonly Sent?

- Sunday leads with 22%, making it the most common day for phishing emails to be sent.
- Friday accounts for 19%, showing a sharp rise in attacks heading into the weekend.
- Monday sees 15% of phishing emails, indicating attackers target users at the start of the workweek.
- Tuesday and Saturday each record 13%, reflecting moderate and consistent phishing activity.
- Wednesday drops slightly to 11%, suggesting a midweek dip in phishing campaigns.
- Thursday has the lowest share at just 7%, making it the least targeted day for phishing emails.
- Overall, phishing activity is highest around weekends, with Friday to Sunday accounting for over 50% of total attacks. (Reference: StationX)

Industry Breakdown of Voice Phishing

- Financial services remained a leading target, with 44.1% of recorded vishing calls tied to the sector in one 2026 dataset.
- Customer support teams showed the highest vishing susceptibility at 11.5%, making service-heavy businesses a prime target.
- Technology-related support scams caused $924.5 million in U.S. losses in 2023, up from $806.6 million in 2022.
- American consumers now receive 9.9 unwanted calls per week on average, or more than 500 per year.
- Unwanted calls have been growing at a 16% compounded annual rate since 2023 across surveyed markets.
- In FY 2025, the FTC received more than 2.6 million Do Not Call complaints, many tied to robocalls and imposter calls.
- About 31.5% of phone companies had not installed the required anti-robocall software, according to one 2026 report.
- In February 2026, U.S. robocalls averaged 136.8 million calls per day, underscoring telecom's role in spoofed-call delivery.

Device and Channel Trends in Voice Phishing

- Mobile phones account for over 68% of voice phishing attacks, reflecting the shift toward personal devices.
- VoIP-based calls make up more than 60% of spoofed communications, enabling anonymity.
- Multi-channel phishing campaigns combining voice, SMS, and email increased by 97% in 2025.
- Robocalls exceeded 50 billion calls annually in the U.S., many linked to scams.
- Messaging apps now support cross-channel vishing attacks, with attackers initiating contact via SMS before switching to calls.
- Corporate landlines still account for 25% of targeted enterprise vishing incidents, especially in large organizations.
- AI-powered voice bots can now handle thousands of simultaneous calls, scaling attacks significantly.
- Call-back phishing scams, in which victims dial fake support numbers, increased by 40% in 2025.
- Cloud telephony platforms let attackers rotate numbers instantly, reducing detection rates.

Department-Level Voice Phishing Trends

- IT help desks are the top target in 42% of vishing attacks, due to their access to credentials.
- Finance departments experience over 30% of successful voice phishing breaches, often involving payment fraud.
- HR teams reported a 24% increase in impersonation calls, especially related to payroll or benefits scams.
- Customer support agents account for 35% of social engineering vulnerabilities, due to high call volume.
- Executives and senior managers are targeted in 28% of vishing campaigns, often through CEO fraud schemes.
- Remote workers faced a 46% higher risk of voice phishing attacks than in-office staff.
- Sales teams reported that 18% of phishing incidents involved fake client calls requesting sensitive data.
- Legal departments saw a 12% rise in vishing attempts, often tied to contract or compliance impersonation.
- Internal audit teams flagged higher attack success rates in departments lacking call verification protocols.

Caller Spoofing in Voice Phishing

- Caller ID spoofing is used in over 75% of vishing attacks, making calls appear legitimate.
- Billions of spoofed robocalls occur each month in the U.S.
- Call authentication protocols reduced spoofed calls by up to 30% in compliant networks, though gaps remain.
- Fraudsters often impersonate banks, with over 40% of spoofed calls mimicking financial institutions.
- Government impersonation via spoofed numbers increased by 31% in 2025, exploiting tax and legal fears.
- Businesses reported a 22% rise in supplier impersonation calls, leading to invoice fraud.
- Spoofing attacks targeting customer support numbers increased by 45% year-over-year.
- Despite regulations, international spoofing bypasses local protections in over 50% of cases.
- Enterprises adopting call authentication tools saw a 28% drop in successful spoofing attacks.

AI Voice Phishing

- AI-generated voice scams increased by over 1,200% in 2025, signaling rapid adoption.
- 74% of organizations reported AI-enhanced phishing attempts, including voice cloning.
- Deepfake voice technology can now replicate speech from less than 3 seconds of audio input.
- AI-driven scams achieve higher success rates through personalization, increasing engagement by over 50%.
- Attackers use AI to automate real-time conversational responses, reducing suspicion during calls.
- Financial institutions reported a 32% increase in AI voice fraud attempts in 2025.
- AI voice cloning scams led to multi-million-dollar losses in several high-profile cases, including executive impersonation.
- Security teams note that AI phishing tools cut attack setup time by over 60%, enabling faster campaigns.
- Nearly 1 in 3 businesses lack defenses against AI voice phishing, exposing a major security gap.

Voice Phishing Awareness and Reporting

- Only 26% of people who lost money to an online scam reported it to law enforcement, showing that major underreporting persists.
- More than 54% of adults globally said they had been personally targeted by a scam or knew someone who had.
- 78% of adults globally said they feel educated about spotting scams, yet only 13% were fully aware of available protections after being scammed.
- Users with recent phishing training reported suspicious messages at a 21% rate versus a 5% base rate, roughly a fourfold increase.
- Nearly 44% of consumers replied to a suspicious message that contained no link, highlighting how social engineering still succeeds despite awareness.
- Among adults targeted by scams, 20% said they lost money as a result over the past two years.
- After being scammed, 97% of victims said the experience changed how they behave online or financially.

Deepfake Voice Scams

- A well-known case involved scammers using AI voice cloning to steal $35 million from a UAE bank, highlighting enterprise risk.
- Deepfake audio can now achieve over 90% accuracy in mimicking real voices, making detection difficult.
- Financial institutions reported a 32% rise in deepfake-related fraud attempts in 2025.
- Fraudsters need as little as 3–10 seconds of audio to clone a voice convincingly.
- Over 60% of security leaders say deepfake scams are now a top concern, surpassing traditional phishing.
- Synthetic voice scams targeting family members, often posing as emergency calls, increased by 45% in 2025.
- Enterprises report that deepfake scams have higher success rates than email phishing, due to emotional manipulation.

Voice Phishing Prevention

- Multi-factor authentication can block over 99% of automated attacks, including credential theft via vishing.
- Organizations using call verification protocols reduced vishing success rates by up to 46%.
- Employee training programs lower phishing susceptibility by up to 70% over 12 months.
- Caller authentication frameworks reduced spoofed calls by 30% in compliant networks.
- AI-based fraud detection tools can identify over 85% of suspicious voice patterns in real time.
- Companies implementing zero-trust security models saw a 50% drop in social engineering breaches.
- Blocking unknown or suspicious numbers reduces scam exposure by up to 25% for consumers.
- Financial institutions using voice biometrics cut fraud losses by over 20% annually.
- Regular phishing simulations improve employee detection rates by over 60%, strengthening resilience.
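The prevention figures above can be combined into a rough back-of-the-envelope estimate of stacked defenses. This is a minimal sketch under an assumption the article does not make: that call verification (up to a 46% reduction) and employee training (up to a 70% reduction) act independently, so their residual risks multiply.

```python
# Rough residual-risk estimate for stacked vishing mitigations.
# ASSUMPTION (not from the article): mitigations act independently,
# so the fractions of attacks that still succeed multiply.
def residual_risk(*reductions: float) -> float:
    """Return the fraction of baseline attacks still succeeding
    after applying each fractional reduction independently."""
    risk = 1.0
    for r in reductions:
        risk *= (1.0 - r)
    return risk

# Call verification (-46%) stacked with employee training (-70%):
remaining = residual_risk(0.46, 0.70)  # 0.54 * 0.30 = 0.162
print(f"Residual success rate: {remaining:.1%}")  # roughly 16% of baseline
```

Under that independence assumption, the two controls together would leave only about one in six vishing attempts succeeding relative to an unprotected baseline, which is consistent with the article's broader point that layered defenses compound.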
Frequently Asked Questions (FAQs)

- What percentage of organizations have experienced vishing attacks?
- How many Americans receive scam or vishing calls regularly?
- What is the average financial impact of voice phishing on organizations?

Conclusion

Voice phishing has shifted from opportunistic scams to highly targeted, AI-powered attacks that exploit trust, urgency, and human behavior. From deepfake voice cloning to multi-channel impersonation, the data shows a clear trend: attackers scale faster than traditional defenses. At the same time, organizations that invest in training, authentication, and detection tools consistently reduce risk and financial loss. As voice-based fraud continues to evolve, staying informed and proactive remains the most effective defense.
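As a quick cross-check of the day-of-week phishing shares cited earlier (Sunday 22% down to Thursday 7%), the figures can be summed to confirm they cover the full week and that the Friday-to-Sunday window really exceeds 50%. A minimal sketch using only the article's numbers:

```python
# Day-of-week phishing-email shares as reported in the article (percent).
shares = {"Sun": 22, "Fri": 19, "Mon": 15, "Tue": 13,
          "Sat": 13, "Wed": 11, "Thu": 7}

# The seven shares should account for all reported volume.
assert sum(shares.values()) == 100

# Friday-to-Sunday concentration, matching the "over 50%" claim.
weekend = shares["Fri"] + shares["Sat"] + shares["Sun"]
print(f"Fri-Sun share: {weekend}%")  # prints "Fri-Sun share: 54%"
```

The weekend total of 54% confirms the article's claim that Friday through Sunday carry the majority of phishing email volume.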