I know OpenAI watermarks their stuff, but I wish they wouldn't. It creates a false sense of trust.
Now it means whoever has access to uncensored/non-watermarking models can pass off their faked images as real and claim, "Look! There's no watermark, of course it's not fake!"
Whereas if none of the image models did watermarking, people would (or at least should) know that by default nothing can be trusted.
Yeah, I'd go the other way. Camera manufacturers should have the camera cryptographically sign the data from the sensor directly in hardware, and then provide an API to query if a signed image was taken on one of their cameras.
Add an anonymizing scheme (blind signatures or group signatures), done.
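To make the sign-at-capture idea concrete, here's a minimal sketch in Python using Ed25519 from the cryptography package. Everything here is hypothetical: a real camera would keep a per-device key in secure hardware and chain it to a manufacturer certificate, and the anonymizing layer (blind or group signatures) is omitted entirely because it's a much bigger lift.

    # Hypothetical sketch of sign-at-capture; not any vendor's actual API.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Stand-in for a per-device key burned in at the factory.
    device_key = Ed25519PrivateKey.generate()

    def sign_capture(sensor_bytes: bytes) -> bytes:
        # What the camera would do at capture time: sign the raw sensor data.
        return device_key.sign(sensor_bytes)

    def verify_capture(sensor_bytes: bytes, signature: bytes) -> bool:
        # What the manufacturer's query API would do: check the signature
        # against the device's registered public key.
        try:
            device_key.public_key().verify(signature, sensor_bytes)
            return True
        except InvalidSignature:
            return False

    raw = b"..."  # stand-in for RAW sensor output
    sig = sign_capture(raw)
    print(verify_capture(raw, sig))         # True
    print(verify_capture(raw + b"x", sig))  # False: any edit breaks it

One caveat: a signature over the raw sensor bytes breaks on any processing at all, even a rotation or re-encode, which is roughly the gap efforts like C2PA try to close by signing edit manifests as well.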
There are ways to tell if an image is real, for example if it's been cryptographically signed by the camera, but increasingly it probably won't be possible to tell if something is fake. Even if there's some kind of hidden watermark embedded in the pixels, you can run it through img2img in another tool and wash the watermark out. EXIF data etc. is irrelevant; you can strip it easily or fake it.
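To back up the "easy to strip or fake" point: with Pillow, EXIF doesn't even survive a plain re-save unless you explicitly pass it back in, and writing fake values is just as short. The filenames here are made up.

    from PIL import Image

    img = Image.open("photo.jpg")
    print(dict(img.getexif()))              # whatever the camera wrote

    img.save("stripped.jpg")                # re-save without passing exif=...
    print(dict(Image.open("stripped.jpg").getexif()))  # {} - it's gone

    fake = Image.Exif()
    fake[271] = "Fauxcam"                   # tag 271 = Make; hypothetical value
    img.save("faked.jpg", exif=fake)        # now it "came from" a Fauxcam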
Sure, you can always remove it, but an average person posting AI images on Facebook or whatever probably won't bother. I was skeptical of Google's SynthID when I first heard about it, but I've been seeing it used to identify suspected AI images on Reddit recently (the example I saw today was cropped and lightly edited with a filter but still got flagged correctly), and it's cool to have a hard data point when one is present. It won't help against bad or manipulative actors, but it's a decent mitigation for the low-effort slop scenario, since it survives the kind of basic editing a regular person knows how to do on their phone, plus the typical compression applied when uploading and serving.
Not if you strip the EXIF data. Also, the star watermark and SynthID from Gemini get stripped if you paste a Nano Banana pic in and tell it to mirror the image.
I just checked several of the files uploaded to the news post, both the "previous" and "new" ones, in both the PNG and WebP (&fm=webp in the URL) versions; none had any content metadata. So either the internal version they used to generate them skipped adding it, or the metadata was stripped on upload.
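If anyone wants to repeat that check, a crude sketch: scan the raw bytes for C2PA/JUMBF/XMP markers and dump whatever metadata Pillow surfaces. This is a heuristic, not a real C2PA parser, and the filenames are stand-ins for the ones from the post.

    from PIL import Image

    def peek_metadata(path: str) -> None:
        # Look for common content-credential markers in the raw bytes,
        # then list whatever chunk metadata Pillow parsed out.
        data = open(path, "rb").read()
        for marker in (b"c2pa", b"jumb", b"<x:xmpmeta"):
            print(f"{marker!r} present: {marker in data}")
        print("Pillow info keys:", list(Image.open(path).info.keys()))

    peek_metadata("previous.png")
    peek_metadata("new.webp")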
I think society is going to need the opposite: cameras that can embed cryptographic information in the pixels of a video, indicating the footage is real.
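As a toy illustration of "cryptographic information in the pixels", here's a least-significant-bit embed in numpy. LSBs die on the first lossy re-encode, so a real system would need a robust watermark (spread-spectrum, frequency-domain, etc.); this only shows the shape of the idea, with a stand-in payload where a real camera would put a signature.

    import numpy as np

    def embed_bits(pixels: np.ndarray, payload: bytes) -> np.ndarray:
        # Overwrite the lowest bit of the first len(payload)*8 samples.
        bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
        flat = pixels.flatten()
        flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
        return flat.reshape(pixels.shape)

    def extract_bits(pixels: np.ndarray, n_bytes: int) -> bytes:
        # Read the lowest bit back out of the first n_bytes*8 samples.
        bits = pixels.flatten()[: n_bytes * 8] & 1
        return np.packbits(bits).tobytes()

    frame = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)  # stand-in frame
    payload = b"\xaa" * 64  # stand-in for a 64-byte signature
    assert extract_bits(embed_bits(frame, payload), 64) == payload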