Are We Building AI Systems That Learn to Lie to Us?


DeepFakes = DeepLearning + Fake

Illustrated by Shelly Palmer

Concerns over deepfakes have been growing in recent years. Facebook has teamed up with Microsoft, the Partnership on AI coalition, and academics from several universities to launch a contest (running from late 2019 to spring 2020) to better detect deepfakes. The social media giant is spending $10 million on the contest.

What are DeepFakes?

The term deepfake — a combination of “deep learning” and “fake” — refers to a form of AI-generated media and originated around the end of 2017 with a Reddit user named “deepfakes”. He shared many videos in which celebrities’ faces were swapped onto the bodies of actresses in pornographic videos, while the non-pornographic content included many videos with actor Nicolas Cage’s face swapped into various movies.

Nicolas Cage Can Now Be Put Into Any Movie in History

The danger of this technique is that “the technology can be used to make people believe something is real when it is not. It can be used to weaken the reputation of a political candidate by making the candidate appear to say or do things that never actually occurred.”

What is powering the tech behind DeepFakes?

In 2016, Ian Goodfellow of Google Brain presented a tutorial on generative adversarial networks (GANs)¹ to the delegates of the Neural Information Processing Systems (NIPS) conference in Barcelona. GANs are generative models: the technique enables computers to generate realistic data by using not one but two separate neural networks.

GANs help us build AI systems that learn to lie to us.

GANs were not the first computer algorithm used to generate data, but their results and versatility set them apart from all the rest, most notably in their ability to generate fake images of realistic quality.

Increasingly realistic synthetic faces generated by variations on Generative Adversarial Networks (GANs). In order, the images are from papers by Goodfellow et al. (2014), Radford et al. (2015), Liu and Tuzel (2016), and Karras et al. (2017)³

For his work on GANs, Ian Goodfellow was named one of MIT Technology Review’s 35 Innovators Under 35 for 2017.

How do GANs work?

A GAN consists of two simultaneously trained models:

  • The generator: trained to generate fake data.
  • The discriminator: trained to discern the fake data from real examples.

The objectives of the generator and the discriminator

GANs set up a competition between the generator and the discriminator. The two play a min-max game: the generator’s objective is to generate data that fools the discriminator, while the discriminator’s objective is to accurately distinguish the generated data from real data.
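As a rough sketch (not code from the original paper), the two sides of this min-max game can be written as simple log losses. Here `d_real` and `d_fake` stand for the discriminator’s probability estimates on real and generated samples:

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    """Discriminator wants D(x) -> 1 on real data and D(G(z)) -> 0 on fakes.

    This is the negation of log D(x) + log(1 - D(G(z))),
    written as a loss to minimize.
    """
    return -(np.log(d_real) + np.log(1.0 - d_fake)).mean()

def generator_loss(d_fake):
    """Generator wants the discriminator to mistake fakes for real data.

    Minimizing log(1 - D(G(z))) pushes D(G(z)) toward 1.
    """
    return np.log(1.0 - d_fake).mean()

# A discriminator that scores correctly (0.9 on real, 0.1 on fake)
# gets a lower loss than one that is merely guessing (0.5 on both):
confident = discriminator_loss(np.array([0.9]), np.array([0.1]))
confused = discriminator_loss(np.array([0.5]), np.array([0.5]))
```

In practice the generator is often trained to maximize log D(G(z)) instead, which gives stronger gradients early in training, but the adversarial structure is the same.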

In the not-so-distant future, AI tools will help us make all kinds of decisions, and some of those tools will have learned to lie to us. Don’t worry, though: we can also use AI to detect manipulated content.

How should we use GANs to build useful apps?

GANs help to improve diagnostic accuracy

Unlike datasets of handwritten letters for optical character recognition, which anyone can procure, examples of medical conditions are harder to come by, and they often require specialized equipment to collect. GANs can produce synthetic examples that improve classification accuracy beyond what is possible with standard dataset augmentation strategies.
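A minimal sketch of that augmentation idea, assuming you already have a trained GAN generator for the rare class (the function name and the `sample_synthetic` callable below are illustrative, not from any specific library):

```python
import numpy as np

def augment_with_synthetic(real_x, real_y, sample_synthetic, n_synthetic, synthetic_label):
    """Append GAN-generated examples of an under-represented class to a training set.

    sample_synthetic: callable n -> (n, n_features) array, assumed to wrap
    a pre-trained GAN generator for the rare condition.
    """
    synth_x = sample_synthetic(n_synthetic)
    synth_y = np.full(n_synthetic, synthetic_label)
    x = np.concatenate([real_x, synth_x])
    y = np.concatenate([real_y, synth_y])
    perm = np.random.permutation(len(y))  # shuffle real and synthetic together
    return x[perm], y[perm]
```

A classifier would then be trained on the combined, shuffled set; the hope is that the synthetic examples fill in variation that the small real dataset lacks, which is harder for simple flips and crops to provide.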

Using GANs to design predictive fashion

Amazon has developed an AI fashion designer⁴ using nothing other than GANs. Researchers from Adobe and the University of California, San Diego published a paper⁵ in which they set out to accomplish the same goal. AI might just spawn a whole new style trend: call it “predictive fashion”.

We can use GANs to create new fashions and alter existing fashions to better match someone’s personal style. Adding GANs to recommender systems could help online retailers figure out what customers want beyond the items that already exist.


[1] Ian Goodfellow et al., 2014. Generative Adversarial Nets.

[2] Nick Bostrom, 2016, Oxford University Press. Superintelligence: Paths, Dangers, Strategies.

[3] The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, 2018. Malicious AI Report.