Deepfakes Changed What Courts Require From Digital Evidence

A piece published this week by Cliffe Dekker Hofmeyr addresses a question that should be on every claims attorney's desk: as AI-generated content becomes harder to distinguish from real footage, what does the burden of proof for digital evidence look like in litigation?

The answer isn't just a legal problem. It's a documentation problem.

What Authentication Actually Means Under FRE 901

FRE 901(b)(9) permits authentication with evidence describing a process or system and showing that it produces an accurate result. Not by pointing to a metadata field. Not by naming the device that captured the image. By establishing that the process itself, from capture to verification, is trustworthy.

This matters because the most common way claims professionals currently authenticate digital evidence is metadata. EXIF data. File properties. The creation date in the filename.

None of that survives scrutiny.

EXIF data can be edited. File timestamps can be changed in seconds on any operating system with no technical skill required. A defense attorney doesn't need a computer scientist to challenge these. They need a Google search and five minutes.
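That five-minute challenge is not an exaggeration. A minimal Python sketch (the filename, bytes, and date are made up for illustration) backdates a file's modification time with a single standard-library call:

```python
import os
import tempfile
import time

# A stand-in for a claim photo (path and contents are illustrative).
path = os.path.join(tempfile.gettempdir(), "claim_photo.jpg")
with open(path, "wb") as f:
    f.write(b"\xff\xd8\xff\xe0 fake jpeg bytes")

# Backdate the file's modification and access times to an arbitrary
# date: one call, no special tooling or technical skill required.
backdated = time.mktime((2022, 3, 1, 9, 0, 0, 0, 0, -1))
os.utime(path, (backdated, backdated))

print(time.strftime("%Y-%m-%d", time.localtime(os.path.getmtime(path))))
# prints: 2022-03-01
```

The equivalent one-liner exists on every operating system (`touch -t` on macOS and Linux, PowerShell on Windows), which is exactly why a bare file timestamp carries so little evidentiary weight.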

The AI Problem Accelerates This

Before widespread AI-generated media, a defense challenge to digital evidence looked like this: you're alleging the photo was altered, so prove it. The burden sat with the challenging party.

That's shifting. Courts and litigants are increasingly aware that convincing video can be generated from scratch. That photos can be manipulated in ways that leave no detectable artifact. That AI can produce a plausible image of property damage that never existed.

The Cliffe Dekker Hofmeyr piece frames it squarely in the litigation context: the burden of proof for digital evidence is being renegotiated. That's the right framing.

What I'd add: if you're in claims, you're not waiting for courts to renegotiate this in your favor. You're trying to document losses today, in a system that's already skeptical of unanchored digital evidence and getting more skeptical.

What Daubert Looks For

The Daubert standard asks whether a method can be tested, has been peer-reviewed, has a known or potential error rate, and is generally accepted in the relevant community.

Apply that to blockchain timestamps.

Testable: SHA-256 is deterministic. The same file always produces the same hash. Any party can verify independently.

Peer-reviewed: SHA-256 has been examined by cryptographers for decades. Polygon and Bitcoin have transparent, documented consensus mechanisms.

Known error rate: Finding a SHA-256 collision is computationally infeasible. The anchor either exists on-chain or it doesn't. There's no gray area.

Generally accepted: Blockchain timestamps are increasingly referenced in discovery and authentication arguments across multiple jurisdictions.
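The "testable" prong is easy to demonstrate concretely. A short Python sketch using only the standard library (the evidence bytes are illustrative):

```python
import hashlib

evidence = b"site photo bytes, exactly as captured"  # illustrative content

# Any two parties, on any two machines, at any two points in time:
# the same bytes always produce the same SHA-256 fingerprint.
fingerprint_a = hashlib.sha256(evidence).hexdigest()
fingerprint_b = hashlib.sha256(evidence).hexdigest()

print(fingerprint_a == fingerprint_b)  # True: deterministic
print(len(fingerprint_a))              # 64 hex characters
```

Either side of a dispute can rerun this independently and get the same result, which is what "testable" means in practice.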

EXIF timestamps pass none of these tests. They're not tamper-evident, not independently verifiable, and not based on a process with known integrity characteristics.

Having the File Isn't the Same as Having Proof

Most claims documentation workflows haven't changed in response to any of this. Photos get taken, uploaded to the claim file, timestamped by the platform, and treated as evidence. The assumption is that having the file is the same as having proof.

It isn't.

Having a file proves the file exists. A blockchain anchor proves the file existed before a specific point in time, and that its content hasn't changed since.

That's a different statement. Consider a $400,000 subrogation dispute where the only challenge is timing. Not whether the damage was real, but whether those photos existed before the loss date. The difference between answering that question and not answering it can be the case.

ProofLedger anchors a SHA-256 hash of any file to both Polygon and Bitcoin. The file stays on the user's machine. What goes on-chain is the hash: a 64-character fingerprint mathematically tied to that exact file. The anchor is timestamped by the blockchain, not by a metadata field. It's verifiable by any party with the hash and a network connection.
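A sketch of what that client-side hashing step can look like, assuming a generic implementation (this is not ProofLedger's actual code):

```python
import hashlib

def file_fingerprint(path: str, chunk_size: int = 65536) -> str:
    """SHA-256 a file in chunks so large media never loads into memory.

    The file itself never leaves the machine; only the returned
    64-character hex digest would be anchored on-chain.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()
```

Change a single byte of the file and the digest changes completely. That avalanche property is what makes the anchor tamper-evident: the hash either matches the file in hand or it doesn't.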

Dual-chain anchoring matters for admissibility arguments. Polygon handles the instant, low-cost anchor. Bitcoin provides a second-layer confirmation through daily batch anchoring with merkle proofs. Two independent chains verifying the same anchor is a stronger authentication argument than one.
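A merkle proof is what lets any party confirm that one file's hash was included in a batch without seeing any other file in that batch. A simplified sketch of the mechanism (batch layouts vary between services; this is not ProofLedger's wire format):

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def _next_level(level):
    """Pair adjacent nodes; an unpaired last node is carried up as-is."""
    return [_h(level[i] + level[i + 1]) if i + 1 < len(level) else level[i]
            for i in range(0, len(level), 2)]

def merkle_root(leaves):
    """Root hash of a binary merkle tree over the given leaf values."""
    level = [_h(leaf) for leaf in leaves]
    while len(level) > 1:
        level = _next_level(level)
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes (with left/right flags) needed to rebuild the root."""
    level = [_h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        sibling = index ^ 1
        if sibling < len(level):
            proof.append((level[sibling], sibling < index))  # True = on left
        level = _next_level(level)
        index //= 2
    return proof

def verify(leaf, proof, root):
    """Recompute the root from one leaf and its proof; compare to on-chain."""
    node = _h(leaf)
    for sibling, on_left in proof:
        node = _h(sibling + node) if on_left else _h(node + sibling)
    return node == root
```

The batch's root goes on-chain once; each file holder keeps only their own proof, a handful of hashes. Verification needs the file, the proof, and the published root, and nothing else.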

What C2PA Adds, and Where It Ends

Some cameras and platforms are adopting C2PA, an industry standard backed by Adobe, Google, Meta, Microsoft, and OpenAI. C2PA embeds cryptographically signed provenance metadata directly in the file: who captured it, when, with what device, what edits were applied.

This is useful. For brand-new captures on compatible hardware, C2PA provides a verifiable origin record.

The limitation: the credential lives inside the file. Upload the file to a platform, convert the format, transfer it through a system that doesn't preserve metadata, and the credential can be stripped. In a legal context, if the C2PA credential doesn't survive to the point of dispute, the authentication argument doesn't either.

A blockchain anchor is independent of the file. It lives on the ledger permanently. No platform strips it, no format conversion removes it.

The strongest documentation uses both: C2PA for origin provenance, blockchain for independent temporal verification. One lives inside the file. The other exists outside of it entirely.

Monday Morning

If you're handling claims today, the practical implication is this: document before loss events when you can. Document at first site visit when you can't. Anchor the hash.

The question "when was this taken?" survives challenge if you have an answer that doesn't depend on a metadata field. It doesn't survive if the only answer is a file timestamp that any defense attorney can undermine in ten minutes.

FRE 901(b)(9) authenticates process. A blockchain anchor is process-based authentication. EXIF is not.

The AI-generated content problem doesn't make this more complicated. It makes what was already insufficient more visible.

Build the chain of custody before you need it. That's not a technology argument. It's a claims argument.

(link in first comment)

---

Has your team had a claim where the challenge was timing specifically, not whether the damage existed but whether the documentation predated the loss?