Compliance & Regulation · 8 min read

Will Insurers Be Liable for Missing Deepfake Fraud? Legal Analysis

Legal analysis of emerging insurer liability for failing to detect deepfake fraud, including duty of care arguments, D&O implications, and E&O exposure.

In 2024, a Hong Kong finance worker transferred USD 25 million to fraudsters after a deepfake video call impersonating the company’s CFO. The incident raised immediate questions: should the company have had deepfake detection technology in place? Would its auditors be liable for not flagging the vulnerability? Could its insurers deny a claim on the basis that reasonable precautions were not taken?

These questions are now reaching the insurance industry itself. As deepfake fraud escalates — the FBI reported a 300% increase in AI-enabled fraud complaints between 2023 and 2025 — a parallel legal question is crystallising: when an insurer fails to detect a deepfake-enabled fraudulent claim, and pays out on it, who bears the loss?

The answer is shifting, and it is shifting against insurers who fail to deploy available detection technology.

The Emerging Duty of Care

Common Law Foundations

Insurance law has long imposed a duty on insurers to investigate claims with reasonable care. The precise formulation varies by jurisdiction:

  • In Australia, the duty of utmost good faith under the Insurance Contracts Act 1984 (s13) requires insurers to act with due regard to the interests of policyholders — which includes not inflating premiums by paying fraudulent claims that should have been detected
  • In England and Wales, the duty of good faith in insurance is well established, and courts have recognised that insurers should conduct reasonable investigations before paying claims
  • In the United States, the duty varies by state, but most jurisdictions require insurers to conduct reasonable investigations of claims

The “Available Technology” Argument

The critical legal development is the application of the “available technology” standard to fraud detection. The argument runs:

  1. Deepfake detection technology exists and is commercially available
  2. The technology is effective at identifying synthetic media used in fraudulent claims
  3. The cost of deploying the technology is modest relative to the fraud losses it prevents
  4. An insurer that fails to deploy available, cost-effective detection technology is not exercising reasonable care

This mirrors the development of other technology-based duty of care standards. Hospitals are not liable for failing to use medical technology that does not exist — but they may be liable for failing to use established, available diagnostic tools. The same logic increasingly applies to insurers and fraud detection.

The Learned Hand Test

In US tort law, the Learned Hand formula (from United States v. Carroll Towing Co., 1947) provides a framework: if the burden of precaution (B) is less than the probability of loss (P) multiplied by the magnitude of the loss (L), failure to take the precaution is negligent.

Applied to deepfake detection:

  • B (burden): The cost of deploying deepfake detection technology — typically measured in cents per claim analysed
  • P (probability): The likelihood that any given claim involves deepfake fraud — estimated at 1-3% for digital claims channels (Deloitte, 2025)
  • L (loss): The average loss per fraudulent claim — approximately USD 12,000 for property insurance, significantly higher for specialty lines (NICB, 2024)

Even conservative estimates produce a B < PL result, suggesting that failure to deploy detection technology may constitute negligence.
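
To make the arithmetic concrete, here is a minimal sketch of the Learned Hand comparison using the figures cited above. The probability and loss values are the estimates quoted in this section; the 50-cent per-claim screening cost is an illustrative assumption, not a vendor quote.

```python
# Learned Hand formula: omitting a precaution points toward negligence
# where the burden of precaution B is less than the expected loss P * L.

def negligent_to_omit(burden: float, probability: float, loss: float) -> bool:
    """True if B < P * L under the Carroll Towing framework."""
    return burden < probability * loss

B = 0.50        # assumed screening cost per claim, USD ("cents per claim")
P = 0.01        # low end of the 1-3% estimate for digital claims channels
L = 12_000.00   # average loss per fraudulent property claim, USD (NICB, 2024)

print(f"B = {B:.2f} USD, P*L = {P * L:.2f} USD")   # B = 0.50 USD, P*L = 120.00 USD
print(negligent_to_omit(B, P, L))                  # True
```

Even at the bottom of the cited probability range, the expected fraud loss per claim exceeds the assumed screening cost by more than two orders of magnitude, so the B < PL conclusion survives conservative inputs.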

D&O Implications for Insurance Company Boards

Directors and officers of insurance companies face growing exposure for deepfake fraud losses attributable to governance failures.

The Board’s Oversight Duty

Under corporate law in most jurisdictions, directors owe duties of care and diligence. In the insurance context, this includes oversight of the company’s risk management framework — including fraud risk.

The Caremark standard in US law (and its equivalents elsewhere) holds directors liable for failing to implement adequate reporting and monitoring systems. A board that:

  • Is aware of the deepfake fraud threat (and by 2026, ignorance is not credible)
  • Fails to ensure management has evaluated deepfake detection solutions
  • Does not require reporting on fraud detection effectiveness
  • Has not allocated resources to address the identified risk

…may face derivative claims from shareholders, regulatory action, or both.

Quantifying the Exposure

Consider a mid-size general insurer with AUD 2 billion in gross written premium and a claims expense ratio of 65%. If deepfake fraud represents even 1% of claims expenses, the annual exposure is AUD 13 million. Over a board tenure of several years, cumulative exposure could reach tens of millions.
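
The arithmetic behind that estimate, as a short sketch (the 1% deepfake share is this article's assumption, not an observed rate):

```python
# Annual deepfake fraud exposure for the illustrative mid-size insurer above.
gwp = 2_000_000_000      # gross written premium, AUD
claims_ratio = 0.65      # claims expense ratio
deepfake_share = 0.01    # assumed share of claims expense lost to deepfake fraud

claims_expense = gwp * claims_ratio                  # AUD 1,300,000,000
annual_exposure = claims_expense * deepfake_share    # AUD 13,000,000
print(f"Annual exposure: AUD {annual_exposure:,.0f}")
```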

If the board was presented with deepfake detection solutions costing a fraction of this amount and declined to invest, the D&O liability argument becomes compelling.

D&O Insurance Implications

The irony is acute: D&O insurers may face claims arising from the insured company’s failure to detect deepfake fraud. This creates a potential conflict of interest and a circular liability chain that the insurance industry has not yet fully reckoned with.

D&O policies typically exclude claims arising from fraud committed by the insured directors themselves, but not claims arising from negligent oversight of fraud committed by third parties. The deepfake fraud scenario falls into the latter category.

E&O Exposure for Claims Adjusters and Investigators

Errors and omissions exposure extends beyond the board to the claims function.

The Adjuster’s Standard of Care

Claims adjusters and investigators are expected to exercise the skill and diligence of a reasonably competent professional. As deepfake technology becomes mainstream, the standard of what constitutes competent investigation is evolving.

An adjuster who:

  • Accepts video or photographic evidence at face value without considering synthetic media risk
  • Is not trained to recognise indicators of manipulated media
  • Does not escalate high-risk claims for technical analysis
  • Ignores available detection tools in the claims workflow

…may be found to have fallen below the professional standard of care.
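
The escalation step in that list can be encoded as a simple triage rule in the claims workflow. The sketch below is illustrative only: the provenance flag, the detection score, and the 0.5 threshold are hypothetical stand-ins for whatever signals an insurer's actual detection tooling exposes.

```python
from dataclasses import dataclass

@dataclass
class ClaimMedia:
    """Hypothetical record for a media attachment on a digital claim."""
    has_provenance: bool    # e.g. trusted-capture or content-credential metadata
    detection_score: float  # synthetic-media likelihood from a detection tool, 0..1

def needs_technical_review(media: ClaimMedia, threshold: float = 0.5) -> bool:
    """Escalate rather than accept at face value: flag any media with no
    provenance or with an elevated synthetic-media score."""
    return (not media.has_provenance) or media.detection_score >= threshold

# A video with no capture provenance is escalated even at a modest score,
# documenting that the adjuster considered synthetic media risk.
print(needs_technical_review(ClaimMedia(has_provenance=False, detection_score=0.3)))  # True
```

A rule this simple does not detect deepfakes by itself; its value is evidentiary, showing that high-risk media was routed to technical analysis rather than accepted at face value.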

Professional Indemnity Claims

If an adjuster’s failure to detect a deepfake leads to an improper payout, the insurer may seek recovery through the adjuster’s professional indemnity insurance (or the insurer’s own E&O policy for in-house adjusters). This is not theoretical — as deepfake fraud cases move through the courts, E&O claims will follow.

Training as Risk Mitigation

The most effective E&O risk mitigation is training. Adjusters who have been trained on deepfake risks, who have access to detection tools, and who follow documented investigation protocols are in a far stronger position to defend their decisions.

Regulatory Expectations

Regulators are increasingly treating fraud detection capability as a compliance requirement rather than a business decision.

Australia

APRA’s expectations for operational risk management include fraud detection as a component of claims management. While APRA has not yet taken enforcement action specifically for failure to detect deepfake fraud, its supervisory approach is tightening:

  • Thematic reviews of fraud management practices are planned for 2026-2027
  • APRA’s risk culture assessments include fraud detection capability
  • The Insurance Council of Australia’s General Insurance Code of Practice (2020, updated 2024) requires “effective and fair” fraud management processes

United States

State insurance departments are moving toward explicit requirements:

  • NAIC’s Model Bulletin on the Use of Artificial Intelligence Systems by Insurers (2023) creates expectations for technology deployment in fraud detection
  • California’s Department of Insurance has signalled enforcement interest in insurers with inadequate fraud detection
  • New York’s DFS has included fraud detection in its IT examination scope

European Union

The EU AI Act and Solvency II framework create overlapping obligations:

  • Solvency II’s operational risk requirements extend to fraud management
  • EIOPA has issued guidance on insurers’ operational resilience, including technology adoption
  • National regulators in Germany, France, and the Netherlands have increased fraud management scrutiny

United Kingdom

The FCA’s Consumer Duty (effective July 2023) requires firms to deliver good outcomes for retail customers. Paying fraudulent claims inflates premiums for honest policyholders, which could constitute poor customer outcomes under the Duty.

Subrogation and Recovery Complications

When an insurer pays a claim that later proves to have been deepfake-enabled fraud, recovery is complicated:

Against the Fraudster

Recovery against the perpetrator of deepfake fraud is theoretically straightforward — they obtained money by deception. In practice, these individuals are often difficult to identify, located in jurisdictions with weak enforcement, or judgment-proof.

Against Technology Vendors

If an insurer deployed deepfake detection technology that failed to catch the fraud, does the insurer have a claim against the vendor? This depends on:

  • The vendor’s contractual warranties regarding detection accuracy
  • Whether the specific deepfake variant was within the technology’s designed detection capability
  • Whether the insurer properly integrated and operated the detection system
  • Whether the vendor’s marketing claims were accurate

Against Reinsurers

Reinsurers are increasingly scrutinising cedants’ fraud detection practices. A reinsurer might argue that failure to deploy available deepfake detection technology constitutes a breach of the duty of utmost good faith, potentially voiding reinsurance recovery for deepfake-related losses. While this argument has not yet been tested in court, it is being raised in treaty negotiations.

The Insurance Industry’s Response

The industry is responding to these liability pressures, but unevenly:

Leaders

A small group of insurers — primarily large, technology-forward carriers — have deployed deepfake detection across their digital claims channels. These insurers cite both fraud savings and liability reduction as motivations.

Fast Followers

A larger group is actively evaluating deepfake detection solutions, conducting pilots, and developing implementation plans. Many are motivated by the regulatory signals described above.

Laggards

A concerning number of insurers have not yet seriously evaluated deepfake detection technology. Common justifications include:

  • “We haven’t seen deepfake fraud in our book” — likely because they lack the tools to detect it
  • “The technology is not mature enough” — contradicted by detection accuracy rates exceeding 95% for current-generation deepfakes
  • “It’s too expensive” — contradicted by the cost-benefit analysis described above
  • “Our existing fraud controls are adequate” — without evidence that existing controls can detect synthetic media

These justifications are unlikely to withstand regulatory scrutiny or legal challenge.

Risk Mitigation Strategy

For insurers seeking to manage their liability exposure:

Immediate Actions

  1. Board briefing on deepfake fraud risk and available detection technology
  2. Gap analysis of current fraud detection capabilities against deepfake threats
  3. Market assessment of deepfake detection solutions, including purpose-built insurance solutions
  4. Training program for claims adjusters on synthetic media awareness

Medium-Term Actions

  1. Pilot deployment of deepfake detection in highest-risk claims channels
  2. Policy review to ensure fraud management frameworks address synthetic media
  3. Reinsurance discussion to clarify expectations and treaty compliance
  4. Regulatory engagement to understand supervisory expectations

Ongoing Actions

  1. Monitor detection effectiveness and update technology as deepfakes evolve
  2. Document everything — the board’s consideration, the technology evaluation, the deployment decision, and ongoing performance (a sketch of such a record follows this list)
  3. Review and update the fraud detection program at least annually
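
On the documentation point, a defensible record is contemporaneous and structured rather than reconstructed after a dispute. A minimal sketch of what such a record might capture follows; every field name here is illustrative, not a prescribed format.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class DetectionDecisionRecord:
    """Illustrative audit record for a single fraud-detection decision."""
    claim_id: str
    reviewed_at: datetime
    tool_used: str      # detection product and version applied to the claim
    tool_output: str    # raw result, e.g. "synthetic-likelihood 0.82"
    decision: str       # "paid", "escalated", or "declined"
    rationale: str      # why the decision was reasonable on the facts known
    reviewer: str       # adjuster or committee responsible

record = DetectionDecisionRecord(
    claim_id="CLM-2026-00417",                 # hypothetical identifier
    reviewed_at=datetime.now(timezone.utc),
    tool_used="ExampleDetect v2.1",            # hypothetical vendor name
    tool_output="synthetic-likelihood 0.82",
    decision="escalated",
    rationale="No capture provenance; score above escalation threshold.",
    reviewer="claims-review-board",
)
```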

The Trajectory

The legal trajectory is clear. As deepfake detection technology becomes more widely adopted and more affordable, the standard of care for insurers will ratchet upward. Insurers that deployed early will be in a defensible position. Those that delayed without justification will face increasing liability.

The precedent from other technology adoption curves — cybersecurity, data encryption, automated underwriting controls — suggests that within 3-5 years, deepfake detection will be considered a baseline requirement for competent claims management. Insurers that are not there by then will be outliers, and outliers attract both litigation and regulatory attention.

Conclusion

The question posed in this article’s title — will insurers be liable for missing deepfake fraud? — has a straightforward answer: increasingly, yes. The duty of care is evolving, the technology is available, the cost-benefit analysis favours deployment, and regulators are raising expectations.

The liability exposure spans the organisation: boards face D&O claims for governance failures, adjusters face E&O exposure for investigation failures, and the company faces regulatory action for compliance failures. The cost of deployment is a fraction of the cost of any of these outcomes.

The time for evaluation is over. The time for deployment is now. Contact deetech to discuss how deepfake detection can be integrated into your claims workflow — and your liability mitigation strategy.


This article is for informational purposes only and does not constitute legal, regulatory, or compliance advice. Insurers should consult qualified legal and compliance professionals for guidance specific to their circumstances and jurisdiction.