The system starts with a lengthy "meta-learning stage" in which it watches lots of videos to learn how human faces move. It then applies what it has learned to a single still or a small handful of photos to produce a reasonably realistic video clip. Unlike a true deepfake trained on extensive footage of its target, the results generated from just one or a few images fudge fine details. For example, a fake of Marilyn Monroe in the Samsung lab's demo video missed the icon's famous mole, according to Siwei Lyu, a computer science professor at the University at Albany in New York who specializes in media forensics and machine learning. The approach also means the synthesized videos tend to retain some semblance of whoever played the role of the digital puppet. That's why each of the moving Mona Lisa faces looks like a slightly different person. [...] The glitches in the fake videos made with Samsung's new approach may be clear and obvious. But they'll be cold comfort to anybody who ends up in a deepfake generated from that one smiling photo posted to Facebook.
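The two-stage recipe described above (meta-learn from many examples, then adapt from just a few) is not unique to faces. A minimal sketch of the idea, using a toy Reptile-style meta-learning loop in NumPy rather than anything resembling Samsung's actual model: each "task" here is a sine wave with an unknown phase, standing in for "a different person's face", and the few-shot stage adapts from only five samples.

```python
import numpy as np

# Toy illustration of meta-learning followed by few-shot adaptation.
# NOT Samsung's architecture; a Reptile-style sketch on sine-wave tasks.

rng = np.random.default_rng(0)

def features(x):
    # Fixed sin/cos features; the true function lies in this model class.
    return np.stack([np.sin(x), np.cos(x)], axis=1)

def sgd_steps(w, x, y, lr=0.1, steps=20):
    # Plain gradient descent on mean squared error for one task.
    for _ in range(steps):
        f = features(x)
        grad = 2 * f.T @ (f @ w - y) / len(x)
        w = w - lr * grad
    return w

# --- "Meta-learning stage": see many related tasks and nudge the shared
# --- weights toward each task's solution (the Reptile update).
w = np.zeros(2)
for _ in range(300):
    phase = rng.uniform(0, 2 * np.pi)
    x = rng.uniform(-np.pi, np.pi, size=50)
    y = np.sin(x + phase)
    w_task = sgd_steps(w, x, y)
    w = w + 0.1 * (w_task - w)

# --- "Few-shot stage": a brand-new task, adapted from only 5 samples.
phase = rng.uniform(0, 2 * np.pi)
x_few = rng.uniform(-np.pi, np.pi, size=5)
y_few = np.sin(x_few + phase)
w_adapted = sgd_steps(w, x_few, y_few, steps=50)

x_test = np.linspace(-np.pi, np.pi, 100)
err = np.mean((features(x_test) @ w_adapted - np.sin(x_test + phase)) ** 2)
print(f"test MSE after 5-shot adaptation: {err:.4f}")
```

The parallel to the article's description is loose but deliberate: the expensive first loop corresponds to watching "lots of videos", and the cheap second loop to adapting the shared weights to one new identity from a handful of images, which is exactly where fine detail (the missing mole, the leftover likeness of the driving actor) gets lost.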
Read more of this story at Slashdot.