Insurance Deepfake Fraud Incidents: A Growing Timeline
Tracked timeline of deepfake and AI-generated fraud incidents relevant to insurance — confirmed cases, near-misses, and emerging patterns.
The entries below include direct insurance fraud cases, adjacent financial fraud cases that demonstrate capabilities applicable to insurance, and research findings that signal emerging threats.
Last updated: February 2026
Not every incident below is an insurance claim. Many are from banking, identity verification, or corporate fraud — but each one demonstrates a capability that directly threatens insurance claims processes. We include them because the tools used in these incidents are the same tools that will be (or already are being) used against insurers.
2024
February 2024 — Hong Kong: US$25.6M Deepfake Video Call
What happened: Fraudsters used real-time deepfake technology to impersonate multiple executives of a multinational company during a video conference call. An employee in the Hong Kong office was instructed to transfer US$25.6 million across 15 transactions.
Source: CNN
Insurance relevance: Demonstrates that real-time video deepfakes are production-ready for high-value fraud. Insurance video inspections, telehealth consultations, and video-based identity verification are all vulnerable to the same technique. If a deepfaked CFO can authorize US$25.6M in transfers, a deepfaked claimant can approve a fraudulent insurance settlement.
November 2024 — FinCEN Alert: Deepfake Fraud Targeting Financial Institutions
What happened: The US Financial Crimes Enforcement Network (FinCEN) issued an alert warning financial institutions about the increasing use of deepfakes and generative AI in fraud schemes — including identity fraud, document forgery, and social engineering.
Source: FinCEN Alert FIN-2024-Alert004
Insurance relevance: FinCEN’s alert confirmed that deepfake fraud is no longer theoretical — it’s occurring at sufficient scale to warrant a federal advisory. The alert specifically mentioned altered identity documents and manipulated biometric verification, both of which are directly applicable to insurance KYC and claims processes.
2024 — Voice Cloning Bypasses Bank Security
What happened: A Wall Street Journal reporter cloned her own voice with AI and successfully bypassed her bank’s voice authentication system. Separately, University of Waterloo researchers developed a method to bypass voice authentication with up to 99% success in just six attempts.
Insurance relevance: Insurance call centers use voice-based verification for customer identification. Voice cloning can impersonate policyholders to file fraudulent claims, change policy details, redirect payments, or extract information about existing claims.
2024 — Deepfake Job Candidate Interviews
What happened: Multiple companies reported encountering job candidates using real-time deepfake technology during video interviews — either to impersonate someone else or to conceal their identity. The FBI Internet Crime Complaint Center (IC3) issued a warning about this trend.
Insurance relevance: While not directly insurance fraud, this demonstrates the accessibility of real-time deepfake technology. The same tools used for fake job interviews can be used for fake insurance assessments, fake telehealth consultations, and fake video statements.
2024 — Sumsub Reports 10x Increase in Deepfake Identity Fraud
What happened: Sumsub’s identity fraud research documented a tenfold increase in detected deepfakes used for identity fraud between 2022 and 2024, with identity fraud rates rising from 1.1% to 2.5% of all verifications.
Insurance relevance: Insurance onboarding and claims processes include identity verification. A 10x increase in deepfake identity attempts across industries means insurers are facing the same escalation — whether they’re detecting it or not.
2025
Early 2025 — Pindrop: US$12.5B Contact Center Fraud
What happened: Pindrop’s 2025 Voice Intelligence and Security Report documented US$12.5 billion in contact center fraud losses in 2024, with voice cloning identified as a growing vector. The report noted a 170% increase in voice phishing attacks.
Insurance relevance: Insurers operate contact centers for claims intake, policy servicing, and customer support. Contact center fraud losses are directly applicable — voice-cloned calls to insurance call centers represent the same threat vector documented by Pindrop.
Mid 2025 — Signicat: 2,100% Surge in Deepfake Fraud
What happened: Signicat’s research documented a 2,100% increase in deepfake-based fraud attempts over three years, with deepfakes now representing 6.5% of all fraud attacks. The research also found that only 22% of organizations had implemented specific countermeasures.
Insurance relevance: A 2,100% increase in deepfake fraud across financial services, combined with only 22% of organizations having countermeasures, means the insurance industry — which has been slower to adopt AI detection than banking — is particularly exposed.
Mid 2025 — Regula: 92% of Businesses Hit by Identity Fraud
What happened: Regula’s deepfake research found that 92% of businesses worldwide experienced identity fraud in the past 12 months, with average deepfake-related losses in the financial industry reaching US$600,000 per incident.
Insurance relevance: US$600,000 per incident aligns with the scale of mid-to-large insurance claims. A single successful deepfake claim — a fabricated total loss on a high-value vehicle, an exaggerated property damage claim, or a fraudulent workers’ compensation claim — can easily reach this threshold.
2025 — Booz Allen Hamilton: AI-Generated Claims Targeting Government Benefits
What happened: Booz Allen Hamilton published research on deepfakes targeting government benefits systems with AI-generated claims — including fabricated documentation and synthetic identities used to file fraudulent benefit claims.
Insurance relevance: Government benefits fraud and insurance claims fraud share the same mechanics: submit fabricated evidence to receive payment. The techniques documented by Booz Allen — AI-generated documents, synthetic identities, and manipulated evidence — apply directly to commercial insurance claims.
December 2025 — Gartner Identifies Reality Defender as Deepfake Detection Leader
What happened: Gartner published “AI Vendor Race: Reality Defender Is the Company to Beat in Deepfake Detection” — the first major analyst recognition of the deepfake detection market as a distinct category.
Insurance relevance: Gartner’s recognition signals that deepfake detection is transitioning from niche to mainstream enterprise infrastructure. For insurers, this means the technology is mature enough for production deployment and the market is developing rapidly.
Patterns and Trends
What the Timeline Shows
- Escalating sophistication. The progression from text-to-image generation (2023) to real-time video deepfakes (early 2024) to voice cloning at scale (2024-2025) shows consistent capability expansion.
- Decreasing barriers. Early deepfakes required technical expertise; current tools are consumer-accessible. The pool of potential fraudsters is expanding from organized criminals to opportunistic individuals.
- Financial sector as leading indicator. Banking and financial services have been the primary targets for deepfake fraud. Insurance is structurally similar (document submission, identity verification, payment authorization) and should expect the same threat trajectory with a 12-18 month lag.
- Detection gap widening. Fraud tools are advancing faster than detection is being adopted, so the gap between attack capability and deployed defenses keeps growing.
- Regulatory response accelerating. Regulators are beginning to respond with FinCEN alerts, APRA risk management requirements, and EU AI Act provisions, but insurance-specific guidance has not yet arrived.
What’s Missing from the Timeline
This timeline is limited by what’s been publicly reported. Several categories of incidents are likely occurring but not yet publicly documented:
- Individual insurance claims with AI-manipulated photos — these would be handled internally by insurers and not reported publicly
- Coordinated fraud rings using AI tools — these may be under investigation
- Exaggeration fraud using AI editing — likely the most common and hardest to detect, and unlikely to make headlines
The absence of public insurance-specific incidents doesn’t indicate absence of the threat — it indicates absence of detection and reporting.
This timeline is updated quarterly as new incidents and research are published. If you’re aware of a deepfake fraud incident relevant to insurance that should be included, contact us.
deetech monitors the deepfake threat landscape continuously and updates its detection models accordingly. Request a demo to discuss how we can protect your claims process.