One framework for machine learning is called a “generative adversarial network,” or GAN. The technique pits two neural networks against each other: a generative network produces additions to a given dataset (such as images of faces) and tries to slip them past a second, discriminative network that tries to spot the fakes. The result: better and better fakes.
This is great if you want to have decent video conferencing over slow connections, or if you want the computer to turn your bad monster sketches into beautiful artwork. But the malicious applications of undetectable fakes are endless.
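To make the adversarial idea concrete, here is a minimal sketch of a GAN in plain NumPy — not anything a production system would use, just the two-player loop in miniature. The “generator” is a one-parameter-pair linear map `g(z) = a*z + b` trying to imitate samples from a Gaussian, and the “discriminator” is a single logistic unit `d(x) = sigmoid(w*x + c)` trying to tell real samples from fakes; all names and hyperparameters here are illustrative choices, not from any of the linked articles.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from a Gaussian the generator must learn to imitate.
REAL_MEAN, REAL_STD = 4.0, 1.25

# Generator parameters: g(z) = a*z + b, with noise z ~ N(0, 1).
a, b = 1.0, 0.0
# Discriminator parameters: d(x) = sigmoid(w*x + c), probability x is real.
w, c = 0.1, 0.0

lr, batch = 0.05, 64
for step in range(2000):
    # --- Discriminator step: push d(real) toward 1 and d(fake) toward 0 ---
    real = rng.normal(REAL_MEAN, REAL_STD, batch)
    z = rng.normal(size=batch)
    fake = a * z + b
    s_real = sigmoid(w * real + c)
    s_fake = sigmoid(w * fake + c)
    # Hand-derived gradients of the binary cross-entropy loss, batch-averaged.
    grad_w = np.mean(-(1 - s_real) * real + s_fake * fake)
    grad_c = np.mean(-(1 - s_real) + s_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # --- Generator step: push d(fake) toward 1 (non-saturating loss) ---
    z = rng.normal(size=batch)
    fake = a * z + b
    s_fake = sigmoid(w * fake + c)
    dloss_dfake = -(1 - s_fake) * w  # derivative of -log d(fake) w.r.t. fake
    a -= lr * np.mean(dloss_dfake * z)
    b -= lr * np.mean(dloss_dfake)

# After training, the generator's fakes should land near the real mean.
fakes = a * rng.normal(size=1000) + b
print(f"fake mean ~ {np.mean(fakes):.2f}  (real mean = {REAL_MEAN})")
```

The same alternation — discriminator step, then generator step — is what the face-generating and monster-painting systems below do, just with deep convolutional networks and images in place of these two toy linear models.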
- Designed to Deceive: Do These People Look Real to You? [New York Times] “Given the pace of improvement, it’s easy to imagine a not-so-distant future in which we are confronted with not just single portraits of fake people but whole collections of them — at a party with fake friends, hanging out with their fake dogs, holding their fake babies. It will become increasingly difficult to tell who is real online and who is a figment of a computer’s imagination.”
- Create your own fantastical creatures with Google’s ML-based Chimera Painter [Android Police] “Since imaginary monsters and creatures usually won’t wind up in front of cameras in the wild, the machine-learning algorithm behind the technology had to be trained with the help of game designers who created a few base monsters with two sets of textures — the outline with body part labels and actual character designs. The designers and Google chose the best looking results in each round and fed these to the machine, until there were enough samples and they were happy enough with how the monsters turned out.”
- Nvidia developed a radically different way to compress video calls [Ars Technica] “Nvidia says that its technique can reduce the bandwidth needs of video conferencing software by a factor of 10 compared to conventional compression techniques. It can also change how a person’s face is displayed […] or replace someone’s real face with an animated avatar.”
- US Senate Approves New Deepfake Bill [Infosecurity Magazine] “This bill directs the National Science Foundation (NSF) and the National Institute of Standards and Technology (NIST) to support research on generative adversarial networks. A generative adversarial network is a software system designed to be trained with authentic inputs (e.g. photographs) to generate similar, but artificial, outputs (e.g. deepfakes).”
From the Ohio Web Library:
- Kompella, Kashyap. “AI: The Good News and the Fake News.” EContent, vol. 42, no. 4, Oct. 2019, p. 30.
- Lu, Donna. “AI Turns a Convertible into a Hatchback.” New Scientist, vol. 246, no. 3279, Apr. 2020, p. 19.
- Chesney, Robert, and Danielle Citron. “Deepfakes and the New Disinformation War: The Coming Age of Post-Truth Geopolitics.” Foreign Affairs, vol. 98, no. 1, Jan. 2019, pp. 147–155.