- cross-posted to:
- technology@lemmit.online
- nev@lemmy.intai.tech
Intel’s deepfake detector tested on real and fake videos::We tested Intel’s new tool, “FakeCatcher”, on videos of Donald Trump and Joe Biden - with mixed results.
Yes, the entire process of training the models used in deepfakes involves building a detector in the first place… and then beating it. That's what adversarial means: you keep training the generator until the discriminator can't tell real from fake any more.
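That generator-vs-detector loop can be sketched with a toy 1-D GAN. Everything here (the affine generator, the logistic discriminator, the learning rates) is an illustrative assumption, not anything from Intel's actual systems; it just shows the adversarial game the comment describes:

```python
import numpy as np

# Toy 1-D GAN sketch: a generator learns to mimic samples from N(4, 0.5)
# by playing the adversarial game against a discriminator. All models and
# hyperparameters are illustrative assumptions, not any real deepfake setup.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Discriminator: logistic regression D(x) = sigmoid(w*x + b)
w, b = 0.1, 0.0
# Generator: affine map G(z) = a*z + c, with noise z ~ N(0, 1)
a, c = 1.0, 0.0

lr_d, lr_g, batch = 0.05, 0.05, 64

for _ in range(3000):
    # --- Discriminator update: push D(real) up, D(fake) down ---
    real = rng.normal(4.0, 0.5, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + c
    d_real = sigmoid(w * real + b)
    d_fake = sigmoid(w * fake + b)
    # Gradients of log D(real) + log(1 - D(fake)) w.r.t. w and b
    gw = np.mean((1 - d_real) * real) - np.mean(d_fake * fake)
    gb = np.mean(1 - d_real) - np.mean(d_fake)
    w += lr_d * gw  # gradient ascent: make the detector better
    b += lr_d * gb

    # --- Generator update: fool the (momentarily fixed) discriminator ---
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + c
    d_fake = sigmoid(w * fake + b)
    # Non-saturating generator loss -log D(fake), chain rule through G
    ga = np.mean((1 - d_fake) * w * z)
    gc = np.mean((1 - d_fake) * w)
    a += lr_g * ga  # gradient descent on the generator loss
    c += lr_g * gc

print(round(c, 2))  # generator's mean parameter should drift toward the real mean of 4
```

The same dynamic is why a published detector has a limited shelf life: once its decision signal is exposed, it can be folded into the generator's training loss.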
What's novel here is that they used a hypothetically orthogonal loss metric: something the generator's discriminator isn't actually looking at, namely blood-flow signals in the face. That signal could still be a latent variable the real models pick up on, so generators might be replicating it anyway. But apparently not, since their detector worked when given sufficient resolution.
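The blood-flow signal is remote photoplethysmography (rPPG): the heartbeat modulates skin color slightly, strongest in the green channel. FakeCatcher's actual pipeline isn't public in detail, so this is only a toy sketch of the general idea, run here on synthetic frames with a fake 1.2 Hz "pulse" baked in:

```python
import numpy as np

# Crude rPPG sketch: recover a pulse frequency from the mean green-channel
# intensity of a frame stack. This is a toy illustration of the signal class
# FakeCatcher reportedly uses, not its actual method.

FPS = 30

def dominant_pulse_hz(frames, fps=FPS):
    """Strongest frequency in the heart-rate band (0.7-4 Hz, ~42-240 bpm)
    from a (T, H, W, 3) uint/float frame stack."""
    green = frames[..., 1].mean(axis=(1, 2))  # one value per frame
    green = green - green.mean()              # remove the DC component
    spectrum = np.abs(np.fft.rfft(green))
    freqs = np.fft.rfftfreq(len(green), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return freqs[band][np.argmax(spectrum[band])]

# Synthetic "video": per-pixel noise plus a faint 1.2 Hz (72 bpm) sinusoid
# added to the green channel, standing in for a real face's blood flow.
rng = np.random.default_rng(1)
t = np.arange(10 * FPS) / FPS
frames = rng.uniform(100, 110, (len(t), 8, 8, 3))
frames[..., 1] += 0.5 * np.sin(2 * np.pi * 1.2 * t)[:, None, None]

print(dominant_pulse_hz(frames))  # close to 1.2
```

The resolution point falls out of this naturally: the pulse signal is a fraction of a gray level per pixel, so heavy compression or low resolution averages it away before any detector can see it.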