Reverse engineering generative models from a single deepfake image

FrgMstr

Just Plain Mean
Staff member
Joined
May 18, 1997
Messages
55,602
I am not a fan of FB and deleted my account long ago, but this is surely a step in the right direction on the topic of deepfakes, which I think is something that could certainly be weaponized in many ways.

Reverse engineering generative models from a single deepfake image


Deepfakes have become more believable in recent years. In some cases, humans can no longer easily tell some of them apart from genuine images. Although detecting deepfakes remains a compelling challenge, their increasing sophistication opens up more potential lines of inquiry, such as: What happens when deepfakes are produced not just for amusement and awe, but for malicious intent on a grand scale? Today, we — in partnership with Michigan State University (MSU) — are presenting a research method of detecting and attributing deepfakes that relies on reverse engineering from a single AI-generated image to the generative model used to produce it. Our method will facilitate deepfake detection and tracing in real-world settings, where the deepfake image itself is often the only information detectors have to work with.

Lots more to read in the article.
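
Roughly, the idea (sketched below with made-up module names — the actual FB/MSU architecture is in their paper, not here) is to estimate the generator's "fingerprint" from a single image, then predict properties of the model that produced it from that fingerprint:

```python
# Hypothetical sketch of model attribution via fingerprint estimation.
# FingerprintNet and HyperparamHead are invented names, not the paper's code.
import torch
import torch.nn as nn

class FingerprintNet(nn.Module):
    """Estimates the generator's 'fingerprint' as an image residual."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, img):
        return self.net(img)  # residual pattern left behind by the generator

class HyperparamHead(nn.Module):
    """Predicts properties of the generating model from the fingerprint."""
    def __init__(self, n_outputs=15):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(8)
        self.fc = nn.Linear(3 * 8 * 8, n_outputs)

    def forward(self, fingerprint):
        x = self.pool(fingerprint).flatten(1)
        return self.fc(x)  # e.g. loss type, layer count, normalization used

fen, head = FingerprintNet(), HyperparamHead()
fake = torch.rand(1, 3, 128, 128)   # a single suspect image
hyperparams = head(fen(fake))       # attribution from that one image
```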
 
Ever since deepfakes started being discussed, I have thought this would be a perfect application for machine learning.

You could have it scan a very large sample of real and fake content to train it, and then see if it can tell you with confidence which is which.

A lot of the AI/machine learning being done is dumb and/or borderline malicious, but these people are doing important work!
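
Something like this toy PyTorch sketch is the basic shape of that train-on-lots-of-real-and-fake idea — the folder layout and model choice are just placeholders:

```python
# Toy sketch: fine-tune a stock classifier on folders of real vs. fake images.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([transforms.Resize((224, 224)),
                          transforms.ToTensor()])
# expects data/real/... and data/fake/... (hypothetical layout)
data = datasets.ImageFolder("data", transform=tfm)
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 2)   # two classes: real vs. fake
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for imgs, labels in loader:
    opt.zero_grad()
    loss = loss_fn(model(imgs), labels)
    loss.backward()
    opt.step()
```

At inference time the softmax over the two logits gives you exactly the "with confidence" part: a probability that the image is fake.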
 
Someone had to do this eventually. The potential for abuse was just too big (and no doubt already going on in the wild).
 
From the article "The results showed that our approach performs substantially better than the random ground-truth baseline"

They're being pretty vague. I wonder what percentage they could actually identify.

Also, they require an original image of a deepfake generated by the same algorithm to be able to do that. If it's a brand-new deepfake from an algorithm they haven't seen yet, it won't work. Although I believe this could also be trained to identify deepfaking in general rather than a specific algorithm.

Furthermore, it would be possible for someone to use the detection models to train their deepfakes to be harder to detect and make them even better.

I think eventually it will just be impossible to tell the difference, even for a computer.
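
That arms-race worry is easy to see in code: freeze a copy of the detector and fine-tune the generator to fool it. Everything below is a stand-in, not any real deepfake pipeline:

```python
# Sketch of the arms race: a faker fine-tunes their generator against a
# frozen copy of the detector. Both networks here are trivial stand-ins.
import torch
import torch.nn as nn

generator = nn.Sequential(                     # stand-in deepfake generator
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid())
detector = nn.Sequential(                      # stand-in leaked detector
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1))
for p in detector.parameters():
    p.requires_grad = False                    # frozen: attacker only reads it

opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
source = torch.rand(8, 3, 64, 64)              # images to be deepfaked
for _ in range(100):
    fakes = generator(source)
    # push the detector's output toward the "real" label (0)
    loss = nn.functional.binary_cross_entropy_with_logits(
        detector(fakes), torch.zeros(8, 1))
    opt.zero_grad()
    loss.backward()
    opt.step()
```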
 
You could train it by providing it with various deepfakes and NOT providing the originals, explicitly telling it they are fake, then providing some images that are not fake and telling it they are not. After a while it should be able to pick out the artifacts that give away the fakes fairly reliably (as long as they occur in a good percentage of fakes and are absent from most of the unaltered images).
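
For what it's worth, one concrete version of "pick out the artifacts" from the research literature is to look at the frequency spectrum, since GAN upsampling tends to leave periodic patterns there. A rough sketch with placeholder data:

```python
# Frequency-domain artifact detection sketch: train a simple classifier on
# FFT features of images labeled fake/real. Data and labels are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

def spectrum_features(img_gray: np.ndarray) -> np.ndarray:
    """Log-magnitude of the 2D FFT, flattened into a feature vector."""
    f = np.fft.fftshift(np.fft.fft2(img_gray))
    return np.log1p(np.abs(f)).ravel()

# imgs: grayscale arrays; labels: 1 = fake, 0 = real (no originals needed)
imgs = [np.random.rand(64, 64) for _ in range(200)]   # placeholder images
labels = np.random.randint(0, 2, size=200)            # placeholder labels
X = np.stack([spectrum_features(im) for im in imgs])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(X[:5]))                             # fake/real guesses
```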
 
This is a good thing, even though it will lead to progress going both ways. But we'll still have to deal with the PEBKAC issue.

Deepfakes in movies: "Part of his body passed through a solid object, these special effects suck!"

Deepfakes in real life: "Part of his body passed through a solid object? You and your wacky theories!"
 
I can see a world coming soon where one can hire mercenaries to create deepfakes to help murderers get off, or to frame people for felonies as the tech gets more sophisticated. Sci-fi will continue to become real life at warp speed.
 
It's gonna do wonders for facial recognition if wearing prosthetics/masks/glasses too [insert tinfoil emoji here]
 
That's why I'm going to invent a secure camera that takes secure verifiable pictures and video with an auditable trail. I'll make millions selling to security companies, until China steals my designs. Patent pending.
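
Joking aside, the core of that camera already exists as primitives: hash each frame and sign the hash with a key baked into the device, so anyone can verify it later. A sketch using the Python cryptography package — the camera/key plumbing is imaginary:

```python
# Signed-capture sketch: the device signs each frame's hash; the vendor's
# published public key lets anyone verify the frame wasn't altered.
from hashlib import sha256
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey)

device_key = Ed25519PrivateKey.generate()   # burned in at the factory
public_key = device_key.public_key()        # published by the vendor

frame = b"...raw sensor data..."            # placeholder image bytes
digest = sha256(frame).digest()
signature = device_key.sign(digest)         # the auditable trail entry

public_key.verify(signature, digest)        # raises if the frame was altered
print("frame verified")
```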
 
I'm sure there is some way to use blockchain for this. It will aid in raising funds :p
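
The non-buzzword core of that pitch is just an append-only hash chain — each entry commits to everything before it, so tampering with old records is detectable:

```python
# Minimal hash-chain sketch: each entry hashes the previous entry plus the
# new record, so changing any old record breaks every later hash.
from hashlib import sha256

chain = [sha256(b"genesis").hexdigest()]    # hypothetical ledger

def append(record: bytes) -> str:
    entry = sha256(chain[-1].encode() + record).hexdigest()
    chain.append(entry)
    return entry

append(b"photo-digest-1")
append(b"photo-digest-2")
print(chain)
```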
 