Creating art through a Convolutional Neural Network

Source: Deep Learning on Medium

According to an article from the New York Times, a portrait produced by Artificial Intelligence was hanging at Christie’s New York opposite an Andy Warhol print and beside a bronze work by Roy Lichtenstein. It sold for well over double the price realized by both those pieces combined. “Edmond de Belamy, from La Famille de Belamy” sold for $432,500 including fees, over 40 times Christie’s initial estimate of $7,000-$10,000. The buyer was an anonymous phone bidder.

Artificial Intelligence (AI) is gaining importance across many different fields, and it's drastically changing the nature of creative processes. Have you ever imagined using AI to create art? It's happening right now! Check out this music video made with AI:

Style transfer is the technique of recomposing one image in the style of another. It all started when Gatys et al. published an awesome paper, "A Neural Algorithm of Artistic Style," showing that it was actually possible to transfer the artistic style of one painting onto another picture using Deep Learning, specifically Convolutional Neural Networks (CNNs).

What is a Convolutional Neural Network?

In Deep Learning, a Convolutional Neural Network (CNN, or ConvNet) is a class of deep neural networks most commonly applied to analyzing visual imagery, and it is one of the most powerful architectures for image classification and analysis.

CNNs have other applications as well, such as image and video recognition, recommender systems, medical image analysis, natural language processing, and style transfer.
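At its core, a CNN is built from convolutional layers that slide small filters over an image. As a minimal sketch (the loop-based implementation below is for illustration only; real frameworks use highly optimized versions), here is the basic operation on a toy image:

```python
import numpy as np

# Minimal sketch of the core CNN operation: slide a small filter over an
# image and sum the elementwise products at each position.
def conv2d(image, kernel):
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(16.0).reshape(4, 4)   # toy 4x4 "image"
edge_kernel = np.array([[1.0, -1.0]])   # responds to horizontal changes
result = conv2d(image, edge_kernel)
print(result.shape)  # (4, 3)
```

Stacks of such filters, with non-linearities and pooling in between, let the network build up from edges to textures to whole objects.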

Example of a CNN

Style transfer allows us to apply the style of one image to another image. The key is to use a trained CNN to separate an image's content from its style.

Style transfer works with two different images. We often call these the style image and the content image.

Using a trained CNN, style transfer extracts the style of one image and the content of the other.
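How can the same feature maps describe both content and style? Following the idea from Gatys et al., the content is represented by a layer's feature map itself, while the style is represented by the Gram matrix of that feature map, i.e. the correlations between its channels, which discard spatial layout and keep texture statistics. A small numpy sketch, using a made-up feature map in place of real VGG activations:

```python
import numpy as np

# Hypothetical feature map from one CNN layer: C channels of H x W activations.
C, H, W = 4, 8, 8
rng = np.random.default_rng(0)
features = rng.standard_normal((C, H, W))

# Content representation: the feature map itself (later compared to the
# generated image's feature map, e.g. with a mean-squared error).
content_repr = features

# Style representation: the Gram matrix of channel-to-channel correlations.
flat = features.reshape(C, H * W)  # flatten each channel to a row vector
gram = flat @ flat.T               # (C, C) matrix of inner products

print(gram.shape)  # (4, 4)
```

Because the Gram matrix sums over all spatial positions, two images with the same textures but different layouts get similar style representations.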

We can use VGG, a pretrained CNN, for style transfer. VGG stands for Visual Geometry Group. It's a CNN model proposed by K. Simonyan and A. Zisserman from the University of Oxford in the paper "Very Deep Convolutional Networks for Large-Scale Image Recognition".

VGG16 consists of 13 convolutional layers, each followed by a ReLU non-linearity, interleaved with 5 pooling layers and ending in 3 fully connected layers (16 weight layers in total, hence the name). We can extract representations of the content and style of images from the various layers of this network.

VGG16 structure

This model will try to merge the two images to create a new, third image, as shown below:
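The merging itself is an optimization: start from the content image and repeatedly nudge it to reduce a weighted sum of a content loss and a style loss. As a toy sketch (the quadratic losses below are stand-ins for the real VGG-based losses, so the gradient can be written by hand), the loop looks like this:

```python
import numpy as np

# Toy stand-in: treat "images" as small arrays and minimize a weighted
# content + style objective by gradient descent, mirroring the structure
# of real neural style transfer.
rng = np.random.default_rng(1)
content = rng.standard_normal((8, 8))
style = rng.standard_normal((8, 8))
generated = content.copy()  # common choice: initialize from the content image

alpha, beta, lr = 1.0, 0.5, 0.1  # content weight, style weight, step size
for _ in range(200):
    # Gradients of the quadratic stand-in losses
    # alpha/2 * ||g - content||^2  and  beta/2 * ||g - style||^2.
    grad = alpha * (generated - content) + beta * (generated - style)
    generated -= lr * grad

# With these toy losses, the optimum is the alpha/beta-weighted average.
expected = (alpha * content + beta * style) / (alpha + beta)
print(np.allclose(generated, expected, atol=1e-3))  # True
```

In the real algorithm the gradients flow back through the VGG network, but the trade-off is the same: alpha and beta control how much the result looks like the content image versus the style image.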

Here are some experiments I made using the VGG19 model. You can see the style and content images:

This is not Photoshop, this is artificial intelligence!

Have you heard about Deep Dream?

Deep Dream is another tool for creating art with CNNs. It's a computer vision program created by Google engineer Alexander Mordvintsev that uses a CNN to find and enhance patterns in images via algorithmic pareidolia, creating a dream-like, hallucinogenic appearance in the deliberately over-processed images.

See these images that I generated using Deep Dream!