Let’s fool a neural network!

by using an Adversarial Patch

In this article we will see how to fool an object classifier neural network, specifically VGG16, using a new technique called Adversarial Patch.

The original paper can be found here and all the code and material here.

Introduction

Usually, the standard approach to fooling an object classifier is to create images by iteratively adding some noise in a specific way, trying to maximise the error of the model. The resulting picture looks very similar to the original to human eyes, but can lead the network into a huge error. An example of this family of techniques is DeepFool, introduced in this paper. In a nutshell, it perturbs an image so that it stays almost unchanged to human eyes, yet looks different to a neural network.

The original image is correctly classified as a “whale”, while the second image (created by adding the two images together) is classified as a “turtle”.
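
DeepFool itself is iterative, but the core idea behind this family of attacks is easy to see in its simpler, one-step relative, the Fast Gradient Sign Method: compute the gradient of the loss with respect to the input pixels and push the image in the direction that increases the loss. Below is a minimal sketch using the pretrained VGG16 that ships with Keras (the perturb helper and the eps value are just illustrative, they are not taken from the DeepFool paper):

import tensorflow as tf
from tensorflow.keras.applications.vgg16 import VGG16

model = VGG16(weights="imagenet")
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

def perturb(image, true_class, eps=2.0):
    # image: preprocessed float32 tensor of shape (1, 224, 224, 3)
    image = tf.convert_to_tensor(image)
    with tf.GradientTape() as tape:
        tape.watch(image)
        loss = loss_fn(tf.constant([true_class]), model(image))
    grad = tape.gradient(loss, image)
    # Nudge every pixel a small step in the direction that increases the loss:
    # nearly invisible to a human, but it can flip the network's prediction.
    return image + eps * tf.sign(grad)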

While this approach works very well, it is not suitable for real-life scenarios. Quoting the paper:

This strategy can produce a well camouflaged attack, but requires modifying the target image

The method presented here to create an Adversarial Patch works by adding a patch, a sticker, to an image instead of changing the image completely.

Thus, one can just print the sticker and easily place it near a real object in order to fool a network. Specifically, the patch is generated to make the model predict a wrong class, in our case “toaster”. Also, this works wonderfully in a real-life scenario where, for example, the classifier uses a camera to perform live object detection.

I strongly suggest reading section 2 of the original paper to get all the details of how the patch is found.
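
Very roughly, the objective in section 2 trains the patch p over many images x, patch locations l and transformations t (scale, rotation, and so on) so as to maximise the probability that the classifier assigns the target class ("toaster" in our case) to the patched image:

p* = argmax_p  E_{x ~ X, t ~ T, l ~ L} [ log Pr( toaster | A(p, x, l, t) ) ]

where A(p, x, l, t) is the operator that places the transformed patch p on the image x at location l.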

Hands-on

Let’s actually try it. The paper provides a patch, specifically created for VGG16 (a widely used object classification model), that is designed to make the classifier believe that there is a toaster in the image. I use this website in order to test whether it works.

Header of the online demo of the VGG16 that I used

A more scientific way to do that would be to download a trained VGG16 model and plot the error for each image in the test set with the patch applied to it, but I did not have the time to do that.
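
For reference, such an evaluation could look roughly like the sketch below, using the pretrained VGG16 that ships with Keras; the patched_test_set folder is hypothetical and is assumed to already contain test images with the patch applied:

import glob
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input, decode_predictions
from tensorflow.keras.preprocessing import image as keras_image

model = VGG16(weights="imagenet")

def top1_label(path):
    # Load, resize and preprocess a single image, then return the top-1 class name.
    img = keras_image.load_img(path, target_size=(224, 224))
    x = preprocess_input(np.expand_dims(keras_image.img_to_array(img), axis=0))
    return decode_predictions(model.predict(x), top=1)[0][0][1]

patched = glob.glob("patched_test_set/*.jpg")
hits = sum(top1_label(p) == "toaster" for p in patched)
print(f"{hits}/{len(patched)} patched images classified as toaster")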

Also, the patch should not be applied directly on top of the object, otherwise it covers the very object we are trying to classify.

To apply the patch to an image you could use your favorite image editing software. I decided to be a little fancy and create a command line tool in node.js to do it. It allows me to specify the position and the size of the sticker on a target image; you can find the code here.

> node index.js ../images/banana.jpg center up 2
Processing image: ../images/banana.jpg
Path will be placed center-up
Image saved at: ../images/banana-pathed.jpg

The first parameter is the source image path; the second and the third represent the position of the patch, horizontal and vertical respectively; and the last one is the size ratio of the patch with respect to the source image.

In this case, the script takes an image of a banana, applies the sticker to it and stores the result; this is the output.
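
If you prefer not to use node.js, the compositing step itself only takes a few lines; here is a rough Python equivalent built on Pillow. The apply_patch helper, the position fractions and the file names are just illustrative, this is not the code of my CLI:

from PIL import Image

def apply_patch(src_path, patch_path, out_path, x_frac=0.5, y_frac=0.25, ratio=2):
    # x_frac / y_frac place the patch centre as a fraction of the image size
    # (0.5, 0.25 roughly means "center up"); ratio is the source-to-patch size ratio.
    img = Image.open(src_path).convert("RGBA")
    patch = Image.open(patch_path).convert("RGBA")
    side = img.width // ratio
    patch = patch.resize((side, side))
    x = int(img.width * x_frac) - side // 2
    y = int(img.height * y_frac) - side // 2
    img.paste(patch, (x, y), patch)  # the patch's alpha channel acts as the mask
    img.convert("RGB").save(out_path)

apply_patch("banana.jpg", "toaster_patch.png", "banana-patched.jpg")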

Let’s now see what VGG16 predicts for the original image and the “patched” one. We use this website in order to upload an image and get its prediction.

With the original picture:

Yep, it is a banana

With the patched one

Now it’s a toaster!

It works! Or, at least, for the banana. Let’s try another example:

node index.js ../images/bird.jpg left center 3
We fool it again!

On the original image, the model correctly predicts the class “Jay” with 100% confidence; on the patched one it predicts “Toaster” with 67.1% confidence and “Jay” with only 3.5%. Yep, we fooled it again!

I suggest you try it with your own images, it is a lot of fun.

Conclusion

The adversarial patch presented here is a new way to effectively fool a neural network in a real-life scenario without modifying the target image. A patch can be printed as a sticker and applied directly to an object in order to change the predicted class and “fool” the neural network, and this can be very dangerous.

The original paper can be found here and all the code and material here.

Thank you for reading,

Francesco Saverio

