Deepfake Fraud Statistics: How AI Is Changing Insurance Crime
The definitive collection of deepfake and AI fraud statistics relevant to insurance. Growth rates, detection rates, cost per incident, and projections.
This page collects statistics specifically related to AI-enabled and deepfake fraud in insurance and adjacent financial services. It complements our broader insurance fraud statistics 2026 page with data focused on the technology-driven fraud threat.
Every statistic includes its source. We update this page as new data is published.
Last updated: February 2026
Deepfake Growth and Prevalence
Volume Growth
- Deepfake content online has grown roughly 900% year-on-year across the 2020–2025 period (World Economic Forum, 2025)
- Deepfake-related fraud attempts in financial services increased 3,000% between 2022 and 2025 (Sumsub Identity Fraud Report, 2025)
- The number of deepfake videos online exceeded 500,000 in 2024, up from approximately 15,000 in 2019 (Sensity AI)
- Deepfake generation tools available online grew from approximately 50 in 2020 to over 500 in 2025 (DeepMedia, 2025)
- Free deepfake creation tools capable of producing forgeries convincing enough for insurance fraud became widely available in 2023, effectively eliminating the cost barrier to entry
Deepfakes in Financial Services
- 1 in 15 identity verification attempts in financial services now involves a deepfake or synthetic element (Jumio, 2025)
- Deepfake fraud losses in financial services (including insurance, banking, and fintech) reached an estimated USD $25 billion in 2024 (Deloitte Center for Financial Services)
- Projected deepfake fraud losses in financial services: USD $40 billion by 2027 (Deloitte)
- 76% of financial services firms surveyed reported encountering deepfake fraud attempts in 2024, up from 26% in 2022 (Regula Forensics, 2025)
- The average deepfake fraud incident in financial services costs USD $450,000 (Regula, 2024)
Deepfakes in Insurance Specifically
- An estimated 17% of insurers reported deepfake-related fraud attempts in 2024 (ACORD Technology in Insurance Survey, 2025)
- By 2025, this figure rose to an estimated 38%, though underreporting is significant
- Deepfake-related insurance fraud costs are estimated at USD $1.2 billion annually as of 2025 (industry estimate, multiple sources)
- The Insurance Council of Australia flagged AI-generated document fraud as the fastest-growing fraud category in 2025
- Motor, property, and workers’ compensation are the lines most affected by AI-generated document fraud in Australia
Deepfake Types and Techniques
Face Swaps and Video Deepfakes
- Face swap deepfakes account for 64% of all deepfake fraud attempts in identity verification (iProov, 2025)
- Video deepfakes (full face replacement in live or recorded video) account for 22%
- Quality of face swap deepfakes has improved to the point where human detection accuracy is below 50% — effectively random (University of Waterloo, 2024)
- Real-time face swap technology (applicable to live video calls) became commercially available in 2024, enabling live identity fraud during video-based claims verification
Voice Cloning and Audio Deepfakes
- Voice cloning accuracy has reached 95% speaker similarity with as little as 3 seconds of reference audio (Microsoft VALL-E research, 2024)
- Voice deepfake fraud in banking and insurance has increased 350% between 2023 and 2025 (Pindrop, 2025)
- A widely reported 2024 case in Hong Kong involved deepfaked video and cloned voices of company executives on a conference call, used to authorise a USD $25 million transfer
- Insurance applications: fraudsters use voice cloning to impersonate policyholders during phone-based verification, claim authorisation calls, and recorded statements
Synthetic Documents
- AI-generated document fraud has increased 400% between 2022 and 2025 (Onfido, 2025)
- Documents that can be convincingly generated by AI include: identity documents, medical records, repair estimates and invoices, police reports, employment records, and financial statements
- The cost of generating a convincing fake document has dropped from approximately USD $5,000 (specialist forger, 2020) to under USD $50 (AI tools, 2025)
- Time to generate a fake document has dropped from days or weeks to under 5 minutes
Synthetic Identities
- Synthetic identity fraud (combining real and fabricated identity elements) costs US financial services an estimated USD $6 billion annually (Federal Reserve, 2024)
- In insurance, synthetic identities are used for: application fraud, phantom claimant creation, provider fraud, and premium fraud
- 85% of synthetic identities are not flagged by traditional identity verification methods (ID Analytics, 2024)
- The combination of AI-generated faces, synthetic documents, and fabricated digital histories creates identities that pass standard KYC checks
Manipulated Images and Video
- AI-manipulated photos in insurance claims (altered damage photos, staged accident scenes, modified property images) have increased an estimated 250% since 2022 (industry estimates)
- Image manipulation techniques include: object addition/removal, damage severity enhancement, date/location metadata alteration, weather/lighting modification
- Free online tools for image manipulation now include AI-powered features that previously required professional software and expertise
Detection Rates and Capabilities
Human Detection
- Human accuracy at detecting deepfake faces: 48% — worse than chance (University of Waterloo, 2024)
- Human accuracy at detecting AI-generated text: approximately 52% — barely above chance (University of Pennsylvania, 2024)
- Human accuracy at detecting manipulated documents: no comprehensive study available, but estimated at 30–60% depending on manipulation sophistication
- Training improves detection to approximately 65–75% for visual deepfakes but degrades over time as people forget training cues (MIT Media Lab, 2024)
AI-Assisted Detection
- AI-based deepfake detection accuracy: 90–99% on benchmark datasets (various academic papers, 2024–2025)
- Real-world detection accuracy is significantly lower: estimated 70–85% when accounting for novel generation techniques and adversarial attacks (NIST FATE evaluation, 2025)
- Detection accuracy degrades 15–25% when the deepfake was created by a tool not represented in the detector’s training data
- Ensemble detection methods (combining multiple detection approaches) achieve 92–97% accuracy in real-world conditions (NIST, 2025)
- False positive rates in commercial deepfake detection systems: 2–8% (industry benchmarks, 2025)
- Average time for AI-assisted detection: under 10 seconds per media item for automated screening
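The accuracy figures above interact with the base rate quoted earlier (roughly 1 in 15 verification attempts involving a deepfake). A quick back-of-envelope Bayes calculation, using illustrative midpoints from this page rather than any single vendor's numbers, shows why even a 2–8% false positive rate matters at that prevalence:

```python
# Back-of-envelope precision estimate for automated deepfake screening.
# Inputs are illustrative values taken from the figures above:
#   prevalence  ~1 in 15 verification attempts involve a deepfake
#   sensitivity 70-85% real-world detection accuracy
#   fpr         2-8% false positive rate

def screening_precision(prevalence: float, sensitivity: float, fpr: float) -> float:
    """Fraction of flagged items that are actually deepfakes (Bayes' rule)."""
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * fpr
    return true_positives / (true_positives + false_positives)

prevalence = 1 / 15  # ~6.7% of verification attempts
for sensitivity, fpr in [(0.70, 0.08), (0.85, 0.02)]:
    p = screening_precision(prevalence, sensitivity, fpr)
    print(f"sensitivity={sensitivity:.0%}, FPR={fpr:.0%} -> precision={p:.0%}")
```

Under these assumptions, only roughly 40–75% of flagged items are genuine deepfakes, which is why flagged media still needs human or forensic review rather than automatic rejection.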
Detection Investment
- Global spending on deepfake detection technology reached USD $1.2 billion in 2025, projected to reach USD $4.8 billion by 2030
- 42% of insurers surveyed plan to invest in deepfake detection technology in 2026 (Deloitte, 2025)
- Currently, only 12% of insurers have deployed dedicated deepfake detection tools (ACORD, 2025)
- The gap between AI generation capability and detection deployment represents the primary vulnerability window for insurers
Cost and Impact
Per-Incident Costs
- Average cost of a deepfake-enabled fraud incident in financial services: USD $450,000 (Regula, 2024)
- Average cost of a traditional (non-AI) insurance fraud incident: USD $15,000–30,000 (NICB, 2024)
- Deepfake-enabled fraud incidents therefore cost roughly 15–30 times as much as traditional fraud, reflecting the greater sophistication and scale of AI-enabled schemes
- Investigation costs for deepfake fraud cases are 3–5x higher than traditional fraud due to the need for specialized forensic analysis
Industry-Wide Costs
- Estimated total AI-enabled insurance fraud losses: USD $1.2 billion in 2025, projected to reach USD $4–5 billion by 2028 (Deloitte, industry estimates)
- AI-enabled fraud as a percentage of total insurance fraud: approximately 1.5% in 2025, projected to reach 8–12% by 2028
- The percentage understates the risk: AI-enabled fraud is growing at 100%+ annually while traditional fraud grows at 5–8%
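The growth-rate gap above is what drives the projected share increase. A rough compounding sketch (not a forecast model), starting from the 1.5% share in 2025 and applying the quoted growth rates for three years, lands inside the projected 8–12% range for 2028:

```python
# Illustrative projection of AI-enabled fraud as a share of total insurance
# fraud, compounding the growth rates quoted above: AI-enabled fraud growing
# ~100% annually, traditional fraud growing 5-8% annually, from a 1.5%
# share in 2025.

def ai_fraud_share(years: int, ai_growth: float, trad_growth: float,
                   start_share: float = 0.015) -> float:
    """Share of total fraud that is AI-enabled after `years` of compounding."""
    ai = start_share * (1 + ai_growth) ** years
    trad = (1 - start_share) * (1 + trad_growth) ** years
    return ai / (ai + trad)

# Three years of compounding (2025 -> 2028)
for trad_growth in (0.05, 0.08):
    share = ai_fraud_share(3, ai_growth=1.00, trad_growth=trad_growth)
    print(f"traditional growth {trad_growth:.0%}: AI share ~{share:.1%}")
```

With these inputs the 2028 share comes out at roughly 9%, toward the lower end of the projected range; faster-than-100% growth in AI-enabled fraud pushes it higher.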
Reputational and Operational Impact
- 63% of consumers say they would lose trust in an insurer that paid a deepfake-based fraudulent claim (Edelman Trust Barometer special report, 2025)
- Insurers that have experienced publicised AI fraud incidents saw 15–20% increases in customer churn in the following 12 months (McKinsey, 2025)
- Regulatory fines for inadequate fraud prevention are increasing: the UK’s FCA issued £23 million in fraud-related fines in 2024
- APRA in Australia has signalled that AI-enabled fraud risk management will be assessed as part of CPS 230 operational risk requirements
Projections and Trends
Near-Term (2026–2028)
- Deepfake quality will continue to improve, with consumer-grade tools reaching professional quality
- Real-time deepfakes will become standard in video-based fraud, undermining live verification
- Voice cloning will become indistinguishable from authentic speech for most listeners
- Document generation AI will produce output that passes current automated screening tools
- Detection technology will lag generation capability by an estimated 12–18 months
Medium-Term (2028–2030)
- C2PA content provenance standards may achieve sufficient adoption to shift the verification paradigm from detection to provenance
- Regulatory frameworks for AI-enabled fraud will mature in major markets
- Insurance industry losses to AI-enabled fraud are projected to reach USD $10+ billion annually if detection investment does not accelerate
- Biometric verification methods will need to evolve significantly to counter deepfake bypass techniques
- Industry collaboration through shared fraud intelligence platforms will become essential
Key Uncertainty
The central uncertainty in all projections is the relative pace of AI generation versus detection capability. If detection keeps pace, losses will be contained. If generation outpaces detection — as it has in 2023–2025 — the exponential growth in losses will continue.
Sources and Methodology
All statistics are drawn from named sources with publication dates. Where exact figures are not publicly available, we note estimates and the basis for estimation. Insurance fraud is inherently difficult to measure — most fraud goes undetected, and reporting standards vary across jurisdictions and lines.
We prioritize data from:
- Government agencies and regulators (FBI, NIST, APRA, FCA)
- Industry bodies (NICB, ICA, ABI, ACORD)
- Peer-reviewed academic research
- Established consulting and research firms (Deloitte, McKinsey, Verisk)
- Specialist fraud detection companies with published methodologies
Where company-published statistics may reflect commercial interests, we note this context and cross-reference where possible.
This page is updated as significant new data becomes available. For corrections or additional sources, contact us.
DeeTech provides AI-powered deepfake detection purpose-built for insurance. Our platform detects synthetic media, manipulated documents, and deepfake identities in claims workflows. See how it works or talk to our team.