The Defense Department Has Produced the First Tools For Catching Deepfakes

Fake video clips made with artificial intelligence can also be spotted using AI -- but this may be the beginning of an arms race. From a report: The first forensics tools for catching revenge porn and fake news created with AI have been developed through a program run by the US Defense Department. Forensics experts have rushed to find ways of detecting videos synthesized and manipulated using machine learning, because the technology makes it far easier to create convincing fake videos that could be used to sow disinformation or harass people. The most common technique for generating fake videos involves using machine learning to swap one person's face onto another's. The resulting videos, known as "deepfakes," are simple to make and can be surprisingly realistic. Further tweaks by a skilled video editor can make them seem even more real. This kind of video trickery relies on a machine-learning technique known as generative modeling, in which a computer learns from real data and then produces fake examples that are statistically similar. A recent twist pits two neural networks against each other in an arrangement known as a generative adversarial network: one network generates fakes while the other tries to distinguish them from real data, pushing the generator to produce ever more convincing forgeries. The tools for catching deepfakes were developed through a program -- run by the US Defense Advanced Research Projects Agency (DARPA) -- called Media Forensics. The program was created to automate existing forensics tools, but has recently turned its attention to AI-made forgery.
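To make the adversarial idea concrete, here is a minimal sketch of a generative adversarial setup, reduced to one dimension so the whole loop fits in plain NumPy. All names and hyperparameters here are illustrative assumptions, not details from the report: the "real" data is drawn from a Gaussian, the generator is just a learned shift applied to noise, and the discriminator is a logistic classifier. The same push-and-pull between the two players is what drives full-scale deepfake generators.

```python
import numpy as np

# Illustrative 1-D GAN sketch (all parameters are assumptions, not from the
# article). Real data ~ N(4, 1). The generator shifts standard-normal noise
# by a learned offset theta; a logistic discriminator D(x) = sigmoid(w*x + b)
# tries to tell real samples from generated ones.
rng = np.random.default_rng(0)
mu = 4.0            # mean of the "real" data distribution
theta = 0.0         # generator parameter: G(z) = z + theta
w, b = 0.1, 0.0     # discriminator parameters
lr_d, lr_g = 0.05, 0.02
batch = 64

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for step in range(2000):
    real = rng.normal(mu, 1.0, batch)
    fake = rng.normal(0.0, 1.0, batch) + theta

    # Discriminator step: gradient ascent on log D(real) + log(1 - D(fake)).
    d_real = sigmoid(w * real + b)
    d_fake = sigmoid(w * fake + b)
    grad_w = ((1 - d_real) * real).mean() - (d_fake * fake).mean()
    grad_b = (1 - d_real).mean() - d_fake.mean()
    w += lr_d * grad_w
    b += lr_d * grad_b

    # Generator step: gradient ascent on log D(fake) -- move generated
    # samples toward the region the discriminator labels "real".
    d_fake = sigmoid(w * fake + b)
    theta += lr_g * ((1 - d_fake) * w).mean()

# After training, the generator's offset has drifted toward mu,
# i.e. the fakes have become statistically similar to the real data.
print(f"learned offset: {theta:.2f} (real mean: {mu})")
```

In a real deepfake pipeline the generator is a deep convolutional network producing face images and the discriminator is a learned image classifier, but the training dynamic is the same: each improvement in the detector forces the generator to produce better fakes, which is exactly the arms-race dynamic the report describes.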

Read more of this story at Slashdot.