
Terence Tao

Federation EN Tue 30.01.2024 17:05:28

The ability of AI tools to readily generate highly convincing "deepfake" text, audio, images, and (soon) video is, arguably, one of the greatest near-term concerns about this emerging technology. Fundamental to any proposal to address this issue is the ability to accurately distinguish "deepfake" content from "genuine" content. Broadly speaking, there are two sides to this ability:

* Reducing false positives. That is, reducing the number of times someone mistakes a deepfake for the genuine article. Technologies to do so include watermarking of human and AI content, and digital forensics.

* Reducing false negatives. That is, reducing the number of times someone mistakes genuine content for a deepfake. Cryptographic protocols can help achieve this, such as digital signatures and other provenance authentication technologies.

Much of the current debate about deepfakes has focused on the first aim (reducing false positives), where the technology is quite weak: AI, by design, is very good at training itself to pass any given test of authenticity, as per Goodhart's law. Moreover, measures that address the first aim often come at the expense of the second. However, the second aim is at least as important, and arguably much more technically and socially feasible, given the adoption of cryptographically secure provenance standards. One promising standard is the C2PA standard c2pa.org/ that has already been adopted by several major media and technology companies (though, crucially, social media companies will also need to buy into such a standard and enable it by default for users for it to be truly effective).
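The provenance idea can be sketched in a few lines. The following is a hypothetical, simplified illustration, not the actual C2PA protocol (which uses X.509 certificates and asymmetric signatures); here an HMAC over SHA-256 stands in for the signing step, and the key name and content are invented for the example:

```python
import hashlib
import hmac

# Hypothetical signing key held by the capture device or publisher.
# Real provenance schemes like C2PA use asymmetric key pairs, so the
# verifier never needs the signing secret; a shared key is used here
# only to keep the sketch self-contained.
SIGNING_KEY = b"device-private-key"

def sign_content(content: bytes) -> str:
    """Attach a tamper-evident signature to content (simplified sketch)."""
    return hmac.new(SIGNING_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, signature: str) -> bool:
    """Recompute the signature; any edit to the content invalidates it."""
    expected = hmac.new(SIGNING_KEY, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

photo = b"raw image bytes from the camera sensor"
sig = sign_content(photo)

print(verify_content(photo, sig))             # genuine content verifies
print(verify_content(photo + b"x", sig))      # any tampering is detected
```

The point of the sketch is the false-negative side of the discussion: content that verifies can be trusted as untampered since capture, while content that fails to verify simply carries no provenance claim.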

ToucanIan

Federation EN Tue 30.01.2024 17:14:10

@tao good point.

l'empathie mécanique

Federation EN Tue 30.01.2024 17:43:16

@tao The cryptography infrastructure would be broken* in no time and then the courts would have to face “cryptographically secure” fakes.

* Knowing, ahem, state actors, the thing would be backdoored through and through so the services could do their thing whenever they need.

Terence Tao

Federation EN Tue 30.01.2024 17:54:19

@dpwiz Badly designed cryptosystems can be broken in a number of ways, but well designed ones, particularly ones with a transparent implementation and selection process, are orders of magnitude more secure. Breaking SHA-2 for instance - which the C2PA protocol uses currently - would not simply require state-level computational resources, but a genuine mathematical breakthrough in cryptography.
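To illustrate the point about SHA-2: it is designed so that even a one-character change in the input yields a completely unrelated digest, and no method meaningfully faster than brute force is known for finding an input matching a given digest. A quick sketch of this avalanche property, using Python's standard hashlib (the input strings are arbitrary examples):

```python
import hashlib

# SHA-256 digests of two inputs differing by a single character.
a = hashlib.sha256(b"genuine content").hexdigest()
b = hashlib.sha256(b"genuine content!").hexdigest()

print(a)
print(b)

# The two digests share essentially no structure; absent a mathematical
# breakthrough, forging content that matches a given digest requires on
# the order of 2^256 guesses, far beyond any state's computing resources.
differing = sum(x != y for x, y in zip(a, b))
print(f"{differing} of 64 hex digits differ")
```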

Perhaps ironically, reaching the conclusion "all cryptosystems can be easily broken" from historical examples of weak cryptosystems falling to attacks, is another example of eliminating false positives (trusting a cryptosystem that is weak) at the expense of increasing false negatives (distrusting a cryptosystem that is strong).

l'empathie mécanique

Federation EN Tue 30.01.2024 22:05:56

@tao It’s in their published threat model / security assumptions:

Attackers do not have access to private keys referenced within the C2PA ecosystem (e.g., claim signing private keys, Time-stamping Authority private keys, etc.). They may, however, attempt to access these keys via exploitation techniques…

And later, in the spoofing section.

Proper key handling is notoriously difficult. And with incentives like these, attackers would be motivated to go after it even harder than they go after DRM systems.

And anyway, no need for a breakthrough if you can walk in with a gag order and do what you need.

nf

Federation EN Tue 30.01.2024 21:43:31

@tao crypto provenance for all media is a huge benefit to mass surveillance. I’m not sure whether I’d prefer to live in a world full of fake media or under the tyranny of everything I do being verifiably traceable back to me. Probably the former TBH.

JJ :blobblackcat:

Federation EN Tue 30.01.2024 21:46:07

@tao from the c2pa technical specification:

A very common scenario will be a user (called an actor in the C2PA ecosystem) taking a photograph with their C2PA-enabled camera (or phone).

this seems like it would necessitate unprecedented levels of integration across hardware, software, and corporations controlling such. and it strongly reminds me of similar efforts put forth by similar companies to control what people can do with their devices (DRM). given the reliance on the verifiability of such low level functions as camera software (presumably via signed binaries) it could not possibly coexist with modifiable, user-controllable software (and hardware).

why is this good, then? it seems in fact quite bad.