Selfie2art — neural style transfer

Source: Deep Learning on Medium



Written by Jeroen

Selfie (left) and rendered image (right) — top image is the piece of art used.

Selfie2art series

In late 2017, a colleague asked us whether we could rebuild a showcase he had seen at an event: it took a picture of you and transformed that photo into a painting mimicking the style of a reference art piece.

We guessed that what the colleague had witnessed was an example of neural style transfer (see Wikipedia). A quick search led us to the PyTorch-Multi-Style-Transfer project by zhanghang1989 on GitHub.

We built a demonstrator on top of that codebase, using its camera_demo code to take a selfie (grabbing a single frame from the webcam stream) and then render the composite image by running that frame through the style-transfer process. Since we took a selfie with the webcam and rendered it as art, we called the project selfie2art.
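The selfie step boils down to grabbing one frame from the webcam and converting it into the tensor layout a style network expects. A minimal sketch of that preprocessing in NumPy (the function and variable names are ours, for illustration, not taken from the camera_demo code):

```python
import numpy as np

def preprocess(frame):
    """Turn an HxWx3 uint8 BGR webcam frame into a 1x3xHxW
    float32 batch scaled to [0, 1], the layout a style network
    typically expects."""
    rgb = frame[:, :, ::-1]        # BGR (OpenCV channel order) -> RGB
    chw = rgb.transpose(2, 0, 1)   # HxWxC -> CxHxW
    return chw[np.newaxis].astype(np.float32) / 255.0

# Grabbing the selfie itself would be a single OpenCV call pair:
#   cap = cv2.VideoCapture(0); ok, frame = cap.read()
frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a webcam frame
batch = preprocess(frame)
print(batch.shape)  # (1, 3, 480, 640)
```

From here, the batch would be moved to the GPU and fed through the network's forward pass.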

This gave us Python code running PyTorch with CUDA on a Linux laptop with an NVIDIA graphics card. In an interactive setup it would render the image from a full-HD webcam in under 2 seconds. People would then take a photo of the screen to keep a record (not saving images reduces the GDPR impact).
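The sub-2-second figure is easy to check on any setup by timing one full render cycle. A hedged sketch (the `stylize` function here is a placeholder identity operation standing in for the actual network forward pass on CUDA):

```python
import time
import numpy as np

def stylize(batch):
    # Placeholder for the real model forward pass; on the actual
    # setup this would be the PyTorch network running on the GPU.
    return batch.copy()

# Simulate a full-HD webcam frame and the preprocessing step.
frame = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)
batch = frame.transpose(2, 0, 1)[np.newaxis].astype(np.float32) / 255.0

start = time.perf_counter()
out = stylize(batch)
elapsed = time.perf_counter() - start
print(f"render took {elapsed:.3f}s")  # interactive target: under 2 s end to end
```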

We have used this showcase at multiple events since, and it runs permanently in our space in Frankfurt. Installing it on a laptop is not simple, however: it depends on older versions of PyTorch, and therefore of CUDA, which are difficult to install on a modern Linux distribution.

Installation issues for our space in Istanbul led to the development of a web version. More about that in a future post.