Insurance Fraud · 6 min read

Telehealth Fraud: When Deepfakes Meet Health Insurance Claims

How deepfake technology enables telehealth fraud — fabricated consultations, impersonated patients and providers, and what health insurers can do about it.

Telehealth went from a convenience to a necessity during the pandemic. It stayed because it works — for patients, providers, and insurers alike. But the same technology that enables a patient to consult a doctor from their living room also enables a fraudster to fabricate consultations that never happened, impersonate patients who never needed treatment, or impersonate providers who never practised medicine.

This isn’t speculative. The capabilities are here. Real-time deepfake technology — demonstrated in the Hong Kong CFO case where multiple executives were impersonated simultaneously on a video call — is directly applicable to telehealth fraud.

The Telehealth Fraud Landscape

Scale of Telehealth

Telehealth adoption remains elevated post-pandemic. The US Department of Health and Human Services has reported sustained telehealth utilisation across Medicare, Medicaid, and commercial insurance. Many insurers now require telehealth as a first-line consultation for routine and follow-up care.

This creates a massive digital surface area for fraud:

  • Millions of telehealth consultations per year across major insurers
  • Video, audio, and text-based consultations — all digital, all recordable, all manipulable
  • Remote environment means no physical verification of patient identity or condition
  • Automated billing tied to consultation records

The Health Insurance Fraud Baseline

The US National Health Care Anti-Fraud Association (NHCAA) estimates that healthcare fraud costs the US healthcare system tens of billions of dollars annually. HHS recovered US$5.9 billion in healthcare fraud judgments and settlements in recent years. The AARP has estimated Medicare fraud alone at approximately US$60 billion per year.

Telehealth adds new fraud vectors on top of this existing baseline — not replacing traditional fraud but expanding the attack surface.

How Deepfakes Enable Telehealth Fraud

Scenario 1: Phantom Consultations

The fraud: A provider (or someone impersonating a provider) claims to have conducted telehealth consultations that never occurred, billing the insurer for each one.

How deepfakes help: AI-generated consultation recordings — video of a fabricated patient interaction, or audio of a simulated conversation — serve as “evidence” that the consultation occurred. If the insurer audits the claim, the recording exists as supporting documentation.

Current capability: Video generation tools can produce short clips of realistic conversations. Voice cloning can generate both sides of a dialog. Combined, they produce a fabricated consultation recording that would pass a cursory audit.

Scenario 2: Patient Impersonation

The fraud: Someone impersonates an insured patient during a telehealth consultation to obtain prescriptions, referrals, or diagnostic services under the patient’s insurance coverage.

How deepfakes help: Real-time face-swapping technology allows the fraudster to appear as the patient during a live video consultation. The provider sees what appears to be their patient and proceeds normally.

Current capability: Real-time face-swapping is production-ready. Consumer tools can run face-swaps during video calls with minimal latency. The University of Waterloo research on voice authentication bypass, combined with visual deepfakes, makes multi-modal impersonation feasible.

Scenario 3: Provider Impersonation

The fraud: Someone impersonates a licensed medical provider to conduct consultations and bill the insurer. The “provider” may have no medical qualification, or may be practising without a license in the relevant jurisdiction.

How deepfakes help: The fraudster uses deepfake technology to appear as a registered provider during video consultations with real patients. Patients receive consultations from someone they believe is a qualified doctor.

Insurance impact: The insurer pays for consultations conducted by unqualified individuals. Beyond the financial loss, there’s a patient safety dimension — medical advice from unqualified persons creates liability exposure for the insurer and the platform.

Scenario 4: Upcoded Consultations

The fraud: A real consultation occurs, but the provider bills for a more complex (and expensive) service than was delivered. The telehealth recording is manipulated to support the higher billing code.

How deepfakes help: AI editing tools can modify a consultation recording to add symptoms, discussions, or examinations that didn’t occur — supporting the higher billing code if the claim is audited.

Current capability: Video editing with AI inpainting can seamlessly alter specific segments of a recording. Audio manipulation can add or modify dialog.

Scenario 5: Fabricated Medical Evidence

The fraud: Claims for treatment that was never needed, supported by fabricated diagnostic evidence generated during a “telehealth consultation.”

How deepfakes help: AI-generated medical imagery (X-rays, MRIs, photographs of conditions) supports the fraudulent diagnosis. Combined with a fabricated or manipulated consultation recording, the entire evidence chain — from consultation to diagnosis to treatment — is synthetic.

Current capability: AI image generation can produce medical-style imagery. While current tools don’t generate diagnostically accurate medical images, they produce images that appear plausible to non-specialist reviewers — which is sufficient for most claims audits.

Why Telehealth Fraud Is Hard to Detect

The Verification Gap

In-person consultations provide natural verification:

  • Physical presence confirms the patient’s identity
  • Direct observation confirms the patient’s condition
  • The provider’s credentials are verifiable at their practice location
  • Medical equipment produces results that are harder to fabricate

Telehealth removes all of these:

  • Identity verification is typically limited to name and date of birth
  • The patient’s condition is observed through a camera — a manipulable medium
  • The provider’s identity relies on platform credentials, not physical presence
  • All evidence is digital — captured, transmitted, and stored in formats that can be manipulated

Scale and Volume

Insurers process millions of telehealth claims. Manual review of consultation recordings is impractical at scale. Audit sampling catches a tiny fraction of claims. The economics of audit mean that most telehealth claims are paid without the recording ever being reviewed.

Jurisdictional Complexity

Telehealth crosses jurisdictions — a patient in one state consulting a provider in another. Verification of provider credentials, scope of practice, and billing legitimacy across jurisdictions adds complexity that creates audit gaps.

Detection Approaches

At the Platform Level

Telehealth platforms can implement detection at the point of consultation:

Presentation attack detection. During the video call, analyze the video stream for deepfake indicators — face-swap artifacts, temporal inconsistencies, and liveness signals that distinguish a real person from a synthetic feed.
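One of the simplest liveness signals is temporal variance: a genuine camera feed carries constant sensor noise, while a frozen or looped synthetic feed changes implausibly little frame to frame. A minimal sketch of that one check (frames are flattened pixel lists; the 0.5 threshold is an illustrative placeholder, and real presentation attack detection combines many stronger signals):

```python
import statistics

def frame_delta(a, b):
    """Mean absolute pixel difference between two equal-length frames."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def liveness_score(frames):
    """Std-dev of frame-to-frame change. Near zero suggests a frozen
    or looped feed rather than a live camera with sensor noise."""
    deltas = [frame_delta(frames[i], frames[i + 1])
              for i in range(len(frames) - 1)]
    return statistics.pstdev(deltas)

def is_suspect(frames, threshold=0.5):
    """Flag feeds whose temporal variance is implausibly low."""
    return liveness_score(frames) < threshold
```

A feed of identical frames scores zero and is flagged; a feed whose brightness varies between frames passes.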

Device attestation. Verify that the video feed comes from a genuine camera on a genuine device, not from a virtual camera or media injection tool. This addresses injection attacks where synthetic video is fed directly into the video pipeline.

Behavioral biometrics. Analyze interaction patterns — typing rhythm, mouse movement, scrolling behavior — during the consultation to establish that the participant is consistent with the registered patient.
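The comparison step can be sketched with a single feature: the mean gap between keystrokes, checked against the patient's enrolled value. The interval values and 25% tolerance below are illustrative; production systems use far richer features (digraph timings, key hold times) than this:

```python
def mean_interval(intervals):
    """Average gap (e.g. in milliseconds) between keystrokes."""
    return sum(intervals) / len(intervals)

def matches_profile(enrolled, session, tolerance=0.25):
    """Crude consistency check: the session's mean inter-key interval
    must fall within `tolerance` (relative) of the enrolled mean."""
    e = mean_interval(enrolled)
    s = mean_interval(session)
    return abs(s - e) / e <= tolerance
```

A session whose typing rhythm roughly matches the enrolled profile passes; a markedly faster or slower typist does not.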

At the Claims Level

Insurers can implement detection at the point of claims processing:

Recording analysis. When consultation recordings are submitted or audited, analyze them for manipulation indicators — spliced audio, inserted video segments, inconsistent lighting or backgrounds.
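Spliced audio often leaves a discontinuity at the cut point. A toy version of that indicator flags sample-to-sample jumps far larger than the recording's median jump (the factor of 8 is an arbitrary illustrative threshold; real splice detection works on spectral features, not raw samples):

```python
def splice_points(samples, factor=8.0):
    """Indices where the sample-to-sample jump exceeds `factor` times
    the median jump -- a crude proxy for a splice discontinuity."""
    jumps = [abs(samples[i + 1] - samples[i])
             for i in range(len(samples) - 1)]
    median = sorted(jumps)[len(jumps) // 2]
    if median == 0:
        return []
    return [i for i, j in enumerate(jumps) if j > factor * median]
```

A smooth signal returns no indices; inserting a segment at a different level produces a flagged position at the join.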

Cross-claim pattern analysis. Identify patterns across claims — the same “patient” appearing in an unusual number of telehealth consultations, the same “provider” billing across an implausible number of patients, or consultation recordings sharing technical fingerprints suggesting common origin.
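The volume side of this analysis reduces to an outlier test over claim counts. A minimal sketch, assuming claims arrive as (provider, patient) pairs and using a z-score cutoff of 2 (an illustrative choice):

```python
from collections import Counter
import statistics

def flag_outliers(claims, z_cutoff=2.0):
    """claims: iterable of (provider_id, patient_id) pairs.
    Flags providers whose claim volume sits more than `z_cutoff`
    standard deviations above the mean across all providers."""
    counts = Counter(provider for provider, _ in claims)
    vols = list(counts.values())
    mean = statistics.mean(vols)
    sd = statistics.pstdev(vols)
    if sd == 0:
        return []
    return [p for p, n in counts.items() if (n - mean) / sd > z_cutoff]
```

A provider billing ten times the volume of its peers stands out immediately; the same shape of test applies to patients with implausibly many consultations.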

Metadata verification. Verify that consultation metadata (timestamps, duration, connection quality metrics) is consistent with a genuine telehealth session and matches the billing codes submitted.
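The duration check alone catches a surprising amount. A sketch, assuming epoch-second timestamps and a hypothetical code-to-minimum-duration table (`tele_brief` etc. are made-up codes, not real CPT rules):

```python
# Hypothetical minimum durations (minutes) per billing code --
# illustrative values only, not actual billing rules.
MIN_MINUTES = {"tele_brief": 5, "tele_standard": 15, "tele_extended": 30}

def metadata_consistent(claim):
    """Check that session metadata supports the billed code: the end
    time follows the start, and the duration meets the code's floor."""
    start, end = claim["start"], claim["end"]
    if end <= start:
        return False
    duration_min = (end - start) / 60
    floor = MIN_MINUTES.get(claim["code"])
    return floor is not None and duration_min >= floor
```

A 20-minute session billed as standard passes; a 5-minute session billed as extended, or a session whose timestamps are inverted, fails.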

Document verification. Analyze any supporting documentation (prescriptions, referrals, diagnostic reports) for AI generation or manipulation indicators.

Provider Verification

Credential cross-referencing. Verify that the provider appearing in the consultation matches the registered provider for the claimed service — using biometric matching against credential photos.
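The matching step typically compares face embeddings from an external model. The embeddings and 0.8 threshold below are hypothetical placeholders; only the cosine-similarity comparison itself is shown:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def same_person(credential_embedding, session_embedding, threshold=0.8):
    """True if the session face embedding is close enough to the
    embedding from the provider's credential photo."""
    return cosine(credential_embedding, session_embedding) >= threshold
```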

Practice pattern analysis. Identify providers with billing patterns that deviate significantly from peers in the same specialty and geography.
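A sketch of the peer comparison, assuming each provider is summarised as (specialty, average billed per claim). The baseline is computed leave-one-out so a high biller cannot mask itself inside its own baseline; the z-cutoff of 2 is illustrative:

```python
import statistics

def deviant_providers(billing, z_cutoff=2.0):
    """billing: {provider_id: (specialty, avg_billed_per_claim)}.
    Flags providers billing more than `z_cutoff` standard deviations
    above same-specialty peers (leave-one-out baseline)."""
    flagged = []
    for pid, (spec, amt) in billing.items():
        peers = [a for qid, (s, a) in billing.items()
                 if s == spec and qid != pid]
        if len(peers) < 3:
            continue  # too few peers for a meaningful baseline
        mean = statistics.mean(peers)
        sd = statistics.pstdev(peers)
        if sd > 0 and (amt - mean) / sd > z_cutoff:
            flagged.append(pid)
    return flagged
```

The geography dimension mentioned above would enter as a second grouping key alongside specialty.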

What Health Insurers Should Do

Immediate Actions

  1. Audit your telehealth claims pipeline. Understand what proportion of claims involve telehealth, what evidence is submitted, and what verification occurs at each stage.

  2. Require platform-level identity verification. Work with telehealth platform partners to ensure patient and provider identity is verified at the start of each consultation — not just at platform registration.

  3. Implement random recording audits. Even without AI detection, reviewing a sample of consultation recordings deters fraud by establishing that recordings are reviewed.

Medium-Term Investments

  1. Deploy AI detection on consultation recordings. Analyze recordings for manipulation indicators as part of the claims audit process.

  2. Integrate detection with billing review. Correlate detection findings with billing patterns to identify providers or patients with both manipulation indicators and unusual billing.

  3. Establish telehealth-specific fraud indicators. Traditional fraud triggers (timing, value, history) need telehealth-specific additions: consultation frequency, provider diversity, geographic patterns, and recording characteristics.


deetech analyses video, audio, and document evidence submitted with insurance claims — including telehealth consultation recordings. Our detection covers face manipulation, voice synthesis, video editing, and document forgery. Request a demo to discuss telehealth fraud prevention.
