DeepFake Ransomware, OaaS Part 1

By Paul Andrei

What happens when you automate harassment?

Obliteration as a Service

OaaS is an article series designed to explore the realistic threats of Artificial Intelligence, with a focus on building viable defenses against them. Across the series, we will explore how existing tools and techniques can be used in exceedingly disturbing ways in order to pursue dark agendas. Each article provides a digestible description of the technical context, a blueprint of the unsettling system, and an outline of possible defense mechanisms against it.

For more details about OaaS, including the complete list of articles, visit this link.

The Context

Ransomware is a type of malicious software that blocks access to the victim’s files and threatens to permanently delete them unless a ransom is paid.

Malicious software comes in many shapes and sizes. Most pieces of malware only act after they somehow get installed on a victim’s machine, be it a laptop, a phone, or something else. There is malware written to target specific operating systems, specific devices, and even specific organizations.

People don’t usually write and distribute malware without a specific financial, ideological, or personal reason. Spyware is created specifically to eavesdrop on one’s digital activity, Adware is built to drive traffic to specific ads, while Rootkits are designed to stealthily take control of your machine as part of a botnet army in order to wreak havoc on various online services.

A particularly annoying type of malware which inflicted massive amounts of damage last year was Ransomware. Once it gets on your machine, this damned strain of malware renders all your files unusable by scrambling the ones and zeros which comprise them. After this encryption is complete, the victim is offered the chance to pay to have the files restored to their original state, with the ones and zeros changed back to their initial configuration, essentially decrypting them with a secret key.
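
To make the mechanism concrete, here is a minimal sketch of symmetric file encryption and decryption using Python’s cryptography library. The file name and contents are illustrative assumptions; the principle it demonstrates, that without the secret key the scrambled bytes are unrecoverable, is exactly what Ransomware exploits.

```python
# A minimal sketch of symmetric file encryption/decryption with the
# `cryptography` library (pip install cryptography). The file name and
# contents are illustrative; the point is that without the secret key,
# the scrambled bytes cannot be restored.
from cryptography.fernet import Fernet

# Create a stand-in "victim file" so the sketch is self-contained.
with open("holiday_photo.jpg", "wb") as f:
    f.write(b"pretend these bytes are a treasured photo")

# The attacker generates a secret key; the victim never sees it.
key = Fernet.generate_key()
cipher = Fernet(key)

# "Encrypting" the file: read the original bytes, overwrite with ciphertext.
with open("holiday_photo.jpg", "rb") as f:
    original = f.read()
with open("holiday_photo.jpg", "wb") as f:
    f.write(cipher.encrypt(original))

# Only someone holding `key` can reverse the operation.
with open("holiday_photo.jpg", "rb") as f:
    scrambled = f.read()
restored = cipher.decrypt(scrambled)
assert restored == original
```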

[Image omitted. Source: Tom’s Guide]

Once the victim’s machine gets infected, it’s practically held hostage and can be set free only if the victim pays a certain amount of money. Sometimes the attacker won’t even restore the files after being paid, leaving the victim completely helpless.

DeepFake is a technique used to combine images and videos, resulting in a fake video that shows a person performing an action that never occurred in reality.

Moving on to the next piece of the puzzle: you have probably used modern image filters on apps like Snapchat, Instagram, or Facebook at least a few times, maybe just out of curiosity. Most of them use Face Detection algorithms to track several key features of your face, like your lips, your eyes, or your cheeks. These algorithms have been trained on thousands of images manually labeled with key facial landmarks.

Some of these image filters allow two users to digitally swap faces with each other. This works by mapping your face onto the other person’s face, precisely aligning the previously identified facial landmarks. More on that here.
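
As a rough illustration, the open-source face_recognition library (built on dlib) exposes exactly this kind of landmark detection in a few lines; the image path below is a placeholder assumption.

```python
# A minimal sketch of facial-landmark detection using the open-source
# face_recognition library (pip install face_recognition), which wraps dlib.
# The image path is a placeholder assumption.
import face_recognition

image = face_recognition.load_image_file("selfie.jpg")

# Returns one dict per detected face, mapping feature names
# ("left_eye", "top_lip", "chin", ...) to lists of (x, y) points.
for face in face_recognition.face_landmarks(image):
    for feature, points in face.items():
        print(f"{feature}: {len(points)} landmark points")
```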

[Image: face detection landmarks. Source: Hacker Noon]

In order to more realistically project someone’s face onto somebody else’s, researchers have turned to Generative Adversarial Networks. GANs are a revolutionary architecture, first introduced by Ian Goodfellow in 2014. The approach pits two distinct components, a Generator and a Discriminator, against each other.

A helpful analogy is the story of the counterfeiting criminal and the forensics scientist. The criminal wants to forge banknotes so well that the scientist won’t be able to discriminate them from real ones, while the scientist wants to get so good at identifying fake banknotes that the criminal won’t be able to trick him. After extensive escalation, the criminal becomes able to create extremely realistic banknotes from scratch.

[Image: Nicolas Cage DeepFake sample]

The same continuously improving forgery happens algorithmically with GANs. Instead of banknotes, researchers have successfully forged videos showing a person performing an action that never occurred, by automatically stitching someone’s face onto the person in the video so that the result looks indistinguishable from real footage.
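
For intuition, here is a deliberately tiny GAN sketch in PyTorch: the Generator learns to forge samples from a 1-D Gaussian while the Discriminator learns to flag them. Real DeepFake systems operate on face images with far larger networks; every dimension and hyperparameter below is an illustrative assumption.

```python
# A toy GAN in PyTorch: the Generator forges samples from a 1-D Gaussian,
# the Discriminator tries to tell forged samples from real ones.
# All sizes and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))   # forger
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))   # detective
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real banknotes"
    fake = G(torch.randn(64, 8))            # "counterfeits"

    # Discriminator step: label real samples 1, forged samples 0.
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the Discriminator call fakes real.
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

print(f"forged sample mean: {G(torch.randn(1000, 8)).mean():.2f} (target 3.0)")
```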

The Threat

DeepFake technology has not only allowed people to swap faces for the fun of it. The technique has been repeatedly used to generate fake intimate content, to impersonate politicians so that they appear to deliver extremist speeches, and to falsely incriminate people in offenses that never happened. Digital content forgery can inflict huge amounts of damage, both financial and psychological, in countless ways. It only takes a handful of images or videos containing faces, plus some awful source footage.

[Image: DeepFake sample]

In this section, I am going to describe a pipeline which combines certain aspects of DeepFake technology with the philosophy behind Ransomware, putting together quite a worrying system out of publicly accessible software.

Most of us use social media, be it Facebook, Twitter, Instagram, or some other time-eating dopamine-secreting platform which enables us to interact with our friends digitally, usually through rich media content. A large part of that media library is composed of selfies, group photos, and other images and videos which contain faces. To make matters worse, a significant fraction of social media users have public profiles, meaning that if a stranger searches for them they can see exactly the same things that their closest friends see.

Therefore, it doesn’t take advanced skills to create a script which automatically crawls public profiles on various social media platforms and scrapes media content. When you browse such a platform, you are interacting digitally with the code behind it: scroll a bit, click on this button, download this photo. The only thing a programmer has to do in order to automatically download content is to simulate those user actions algorithmically: scroll a bit, click on this button, download this photo. The platform won’t (usually) know whether it is interacting with a human or with a script. A popular browser automation framework is Selenium.
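
To illustrate the point, and nothing more, here is a minimal Selenium sketch that scrolls a generic public page and lists the image URLs it finds. The target URL is a placeholder, and real platforms actively rate-limit and block this kind of automation.

```python
# A minimal Selenium sketch: scroll a public page and list image URLs.
# The URL is a placeholder; real platforms rate-limit and block bots.
# Requires: pip install selenium, plus a local Chrome/chromedriver install.
import time
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/some-public-page")  # placeholder URL

# Simulate a user scrolling a few times so lazy-loaded images appear.
for _ in range(3):
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    time.sleep(1)

# Collect the addresses of every image on the page.
for img in driver.find_elements(By.TAG_NAME, "img"):
    print(img.get_attribute("src"))

driver.quit()
```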

[Image: web scraping illustration. Source: JetRuby]

What’s more, most social media sites provide an Application Programming Interface (API), a piece of software specifically designed to facilitate programmatic access to profiles and stored content. By connecting to an API, a programmer can obtain images and videos without the hassle of automating user behavior in a browser, simply by making a web request. Facebook’s offering, for example, is called the Graph API.
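
As a hedged illustration of the idea, the request below asks Facebook’s Graph API for the photos of the authenticated user. It assumes you already hold a valid access token with the user_photos permission, and the API version in the URL is just an example.

```python
# A minimal sketch of fetching one's own photos through Facebook's Graph API.
# Assumes a valid user access token with the user_photos permission;
# the API version in the URL is just an example.
import requests

ACCESS_TOKEN = "EAAB..."  # placeholder token

resp = requests.get(
    "https://graph.facebook.com/v19.0/me/photos",
    params={"access_token": ACCESS_TOKEN, "fields": "images"},
)
resp.raise_for_status()

for photo in resp.json().get("data", []):
    # Each photo entry lists renditions at several resolutions.
    print(photo["images"][0]["source"])
```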

After filtering the collected images with Face Detection, another script can train a Generative Adversarial Network pair to map the harvested faces onto predefined incriminatory videos, automatically generating a fake video that shows the person whose images were scraped off social media performing an action that never occurred.

The last piece of the disturbing puzzle is the trait inspired by Ransomware. Once the fake video has been generated, it can automatically be sent to the victim over the previously exploited social media channels. This time, the victim is offered the opportunity to have the files permanently deleted, rather than restored, in return for a ransom, probably paid in cryptocurrency.

Probably the biggest problem with this system is that it can scale. The process of scraping images, generating video, and demanding ransoms can be fully automated. Additionally, unlike traditional Ransomware, there is no cybersecurity vulnerability which has to be exploited in order to perform the attack; everything happens over regular information channels. A bot will probably be the one communicating with the victim over social media.

[Image source: gumtree]

The lovechild of Ransomware and DeepFake, let’s call it Fakeware, borrows certain traits from each, creating a dangerous pipeline which can generate substantial financial and psychological damage at scale, without having to hack into any machine.

Fakeware is a type of malicious software that automatically generates a fake video showing the victim performing an incriminatory or intimate action and threatens to distribute it unless a ransom is paid.

The only resource the attackers need is computing power for generating the fake content. Unfortunately, a short, several-second video will probably be more than enough to panic the victim into paying the ransom. And if that weren’t enough, the popped cryptocurrency bubble has left people with piles of spare GPUs waiting to recoup their investment.

The Defense

As a potential victim, the easiest way to defend against such a Fakeware attack is to change your privacy settings on social media. Tweak the options so that your profile isn’t publicly available and is accessible only to your close circle of friends. Take a look at the posts you have already shared and manage their availability settings. I encourage you to do that now, for many reasons besides this speculative article.

Take a Data Detox. Social media is definitely not the only place where bad actors can find images and videos in which you appear. Assess your online data footprint by searching for your media content in other publicly accessible sources, like search engines. Take down what you deem necessary, by deleting the resources yourself or by contacting the entities that host them.

[Image: data detox. Source: Medium]

Going back to the counterfeit banknotes analogy, how do societies all over the world cope with that issue? They keep training highly skilled forensics scientists who can identify a large share of the fakes; not all of them, but a significant portion. We should be actively researching new techniques for discriminating fake content from authentic material, despite the fact that the algorithms generating fake content have the explicit objective of making that as hard as possible.
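
As a purely illustrative sketch of the defensive side, a media-forensics pipeline often boils down to a binary classifier over video frames. The tiny CNN below is an assumption-laden toy, imagined as trained elsewhere on labeled real/fake frames, not a production detector.

```python
# A toy media-forensics classifier in PyTorch: score a video frame as
# real vs. fake. The architecture, input size, and random weights are all
# illustrative assumptions; production detectors are far more elaborate.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),   # 3x64x64 -> 16x32x32
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),  # -> 32x16x16
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 1),  # single logit: "how likely is this frame forged"
)

# Pretend `frame` is a normalized 64x64 RGB crop of a face from a video.
frame = torch.rand(1, 3, 64, 64)
p_fake = torch.sigmoid(detector(frame)).item()
print(f"estimated probability the frame is forged: {p_fake:.2f}")
```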

Additionally, if this becomes a widespread threat, the social media platforms that provide the information transfer which makes the attack possible should take responsibility and deploy modern systems for mitigating such threats. This would probably mean specialized teams of data scientists and machine learning engineers who continuously advance and implement the state of the art in media forensics, keeping users as safe as possible.

Further Reading

I highly recommend listening to this 30-minute episode of Mozilla’s IRL podcast, which explores other ways in which your face is valuable in our digitized society.


For more details about Obliteration as a Service, including the complete list of articles, visit this link.