From Newbie to Implementing a Real-Time Arbitrary Style Transfer Network in an Android App _ Part…

Source: Deep Learning on Medium


Day 0:

A couple of days ago I completed all the requirements of the PyTorch Scholarship Challenge from Facebook. I started the course, aptly titled Introduction to Deep Learning With PyTorch, about two months ago knowing absolutely nothing about Deep Learning.

I have learned a lot since then and fallen head over heels in love with Convolutional Neural Networks. As a side project, I was interested in exploring style transfer in more depth, and that is exactly what I will be doing in this series of posts.

Wait, what is Style Transfer?

Style transfer is actually pretty much what it sounds like. It is applying the style — colors and texture — of one image to another. The image whose style is being used for the transfer is called the Style Image and the image the style is being transferred or applied to is called the Content Image.

How exactly is it done using Neural Networks?

Allow me to answer this a bit later. In the following posts I will be exploring different answers to this question given by different research teams, as I read their papers and try to implement the methods outlined in them. For now, just know that it’s done using the ever awesome Convolutional Neural Networks (CNNs).

The goal?

My goal is to build a style transfer model and integrate it into an Android app with a great User Interface (UI), to build a bridge between this particular application of Deep Learning and the common man.

The plan?

As a real beginner in this field, I think I will learn more by taking it step by step. With this in mind, over the next few days and weeks (hopefully not months) I will be reading papers on the topic and implementing in code the techniques they propose.

I will start with the very first paper to suggest the use of Convolutional Neural Networks for style transfer, then go through a few papers that tried to solve issues the first paper left pending. The last paper we will look at is the one that seems to have solved the remaining problem: real-time arbitrary style transfer.

Once all that is done and I have written code corresponding to the methods in the last paper, I will build an Android app and integrate the final model into it.

I will be using the PyTorch framework to build my models.
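As a small preview of the kind of machinery involved: the Gatys et al. paper represents the style of an image through Gram matrices, which measure correlations between the feature maps of a CNN layer. Here is a minimal PyTorch sketch of that computation. This is my own illustrative sketch, not code from any of the papers; the `gram_matrix` helper is a name I made up, and the random tensor merely stands in for real feature maps from a pretrained network.

```python
import torch

def gram_matrix(features):
    # features: (batch, channels, height, width) activations from a CNN layer
    b, c, h, w = features.size()
    # Flatten spatial dimensions so each row holds one channel's activations
    flat = features.view(b, c, h * w)
    # Channel-by-channel correlations capture texture/"style",
    # independent of where in the image each pattern appears
    gram = torch.bmm(flat, flat.transpose(1, 2))
    # Normalize so the loss scale does not depend on layer size
    return gram / (c * h * w)

# Random feature maps standing in for real CNN activations
feats = torch.randn(1, 64, 32, 32)
g = gram_matrix(feats)
print(g.shape)  # torch.Size([1, 64, 64])
```

Matching the Gram matrices of a generated image to those of the style image, while matching raw feature maps to the content image, is the core of the optimization method we will implement first.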

Here is the plan, then. Each of these action items will become a link as I write an article about it.

  1. Implementing the optimization method in Gatys et al.’s paper.
  2. Implementing the feedforward method in Johnson et al.’s paper.
  3. Implementing the Adaptive Instance Normalization paper by Huang et al.
  4. Implementing the method described by the team at Google Brain.
  5. Reflections about all these papers and techniques.
  6. Building the Android App.
  7. Hooking up the model to the Android App.

Let’s dive in! Wish me luck! May the gods of Deep Learning lighten my way!

Further Reading:

These are all the research papers mentioned above that I will be reading and working on over the coming days or weeks:

  • Here is the first paper to propose the use of Convolutional Neural Networks for style transfer.
  • Here is the paper that pioneered the feedforward method and took a step closer towards arbitrary real-time style transfer.
  • Here is the paper that proposed the use of Adaptive Instance Normalization to achieve arbitrary neural artistic transfer.
  • Here is the paper by the team at Google Brain.
  • Here you will find a tutorial about transferring a model from PyTorch to Caffe2 and mobile.