The rise of Deepfake detection algorithms and digital signatures
You have probably heard about “deepfakes” in the media with the proliferation of fake news, synthesized videos made to manipulate conversations or speeches, and altered images that threaten the legitimacy of information presented online. Our collective assumption that videos and photos are reliable records is no longer viable.
Deepfakes entered public consciousness in 2017, when an augmented video of former US president Barack Obama emerged on social media. The video was convincing but fake, and potentially damaging.
Deepfakes are a product of a type of artificial intelligence called Generative Adversarial Networks, or GANs. By 2019, over 2,000 GAN variants were publicly available as research materials and open-source code, making the technology readily accessible to everyone. When implemented with malicious intent, these technologies can degrade the quality of public discourse and undermine the safeguarding of human rights.
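A GAN pits two models against each other: a generator that fabricates samples and a discriminator that scores how real they look; training continues until the fakes become hard to distinguish from real data. The toy sketch below, using plain scalar "samples" and a fixed logistic discriminator (all names and values here are illustrative, not any real framework's API), shows the standard minimax losses and why the discriminator's job gets harder as fakes improve:

```python
import math

def discriminator(x: float) -> float:
    """Toy discriminator: probability that a sample is real.
    In this sketch, real samples cluster near 1.0 and fakes near 0.0."""
    return 1.0 / (1.0 + math.exp(-10.0 * (x - 0.5)))

def gan_losses(real: list[float], fake: list[float]) -> tuple[float, float]:
    """Standard minimax objectives: the discriminator maximizes
    log D(real) + log(1 - D(fake)); the generator minimizes log(1 - D(fake))."""
    d_loss = (-sum(math.log(discriminator(x)) for x in real) / len(real)
              - sum(math.log(1 - discriminator(x)) for x in fake) / len(fake))
    g_loss = sum(math.log(1 - discriminator(x)) for x in fake) / len(fake)
    return d_loss, g_loss

# Early in training: fakes are easy to spot, so the discriminator's
# loss is low and the generator is fooling no one.
d_early, g_early = gan_losses(real=[0.9, 0.95], fake=[0.1, 0.05])

# Later: fakes resemble real samples and the discriminator struggles,
# while the generator's loss has dropped.
d_late, g_late = gan_losses(real=[0.9, 0.95], fake=[0.85, 0.9])

print(d_early < d_late)  # True: telling real from fake got harder
print(g_late < g_early)  # True: the generator improved
```

In a real GAN, both models are deep neural networks updated by gradient descent on exactly these losses; the "arms race" between them is what makes the resulting forgeries so convincing.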
Although the word "deepfake" carries malicious connotations, these technologies have benefits when used responsibly. For example, GANs can help media and fashion companies correct images from photoshoots (e.g., fixing a closed eye, a wrong facial expression, or an incorrect human pose). In fashion, we see GANs used in the design process via sketch-to-image transfer and machine-generated fashion designs (i.e., what we call Creative Machines). We also see the rise of licensed avatars of a model or celebrity used in lieu of an on-location photo or video shoot. Other examples include domain transfers, where the background environment behind a model can be changed to reflect any season or weather condition.
With the ease of use and commodification of GANs, malicious deepfakes will continue to rise, especially as we head into the next US presidential election. The ability to fabricate videos, photos, and speeches is an emerging threat to digital communication online, affecting news organizations, brands, and social media platforms.
According to CNN, over 14,000 deepfake videos were found online in 2019, an 84% increase over the previous year.
Deepfakes are here to stay, and their impact is already being felt everywhere. Lately, we have seen multi-organization attempts to solve this problem. One such initiative is the Deepfake Detection Challenge (DFDC), led by Microsoft, Amazon, Facebook, and the Partnership on AI. The initiative is meant to crowdsource solutions to the difficult problem of identifying deepfakes.
Trending in 2020 will be the rise of deepfake detection algorithms and digital signatures or watermarks (perhaps backed by blockchain) to counter fake videos and images posted online. We predict a rise in deepfake detection and verification startups, and in VC funding to develop a range of technology solutions that protect organizations, brands, and individuals. This prediction is based on our analysis of the deepfake and synthetic media landscapes.
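The core idea behind signing media is simple: the publisher computes a cryptographic signature over the file's bytes at capture or publication time, and any later alteration, however small, breaks verification. A minimal sketch of that idea, using an HMAC over the raw bytes (real media-provenance schemes use asymmetric keys so verifiers never hold the signing key; the key and byte values here are illustrative placeholders):

```python
import hashlib
import hmac

# Hypothetical publisher secret for this sketch only; production systems
# would use an asymmetric key pair (e.g., Ed25519) plus a trusted registry.
SECRET_KEY = b"publisher-signing-key"

def sign_media(data: bytes) -> str:
    """Produce a hex signature over the media bytes."""
    return hmac.new(SECRET_KEY, data, hashlib.sha256).hexdigest()

def verify_media(data: bytes, signature: str) -> bool:
    """Re-compute the signature and compare in constant time."""
    expected = sign_media(data)
    return hmac.compare_digest(expected, signature)

original = b"\x89PNG...raw image bytes..."  # stand-in for a real file
sig = sign_media(original)

print(verify_media(original, sig))         # True: media is untouched
print(verify_media(original + b"x", sig))  # False: any alteration breaks it
```

Unlike detection algorithms, which must keep pace with ever-better forgeries, a signature scheme shifts the question from "does this look fake?" to "can this be proven authentic?", which is why the two approaches are complementary.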