DeepArt and the Future of Video



So I’m in love with DeepArt. The whole idea of machines making art fascinates me, and I love how DeepArt.io, along with Google and Snapchat, has brought that idea to the masses.

We often call the latest photo-augmentation software “filters,” but it’s so much more than that. It uses machine learning and image recognition not to mindlessly map an input to an output, but to learn techniques and patterns from one image and apply them to objects in another. To the average person this might seem like a small distinction, but it displays an entirely new level of intelligence and skill on the part of computers.

While unfortunately this technology isn’t open source, I still had fun playing with the API and finding my own possibilities for this software.

ArtCam

I started playing with functions to create a GUI that would allow for novel interactions with DeepArt. With the system, a user runs a script: it counts down on the screen, takes a picture from the webcam, sends it through DeepArt, and then shows the stylized result to the user. It was amazing to see the delight this brought to people. Some displayed the created pictures proudly as their profile pics, but it also enabled people to be goofy and creative.
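To give a sense of the flow, here is a minimal sketch of an ArtCam-style script in Python with OpenCV. The function names (artcam, stylize_with_deepart) are mine, and stylize_with_deepart is a hypothetical placeholder, since DeepArt’s API isn’t open source and the real upload details will differ.

import cv2


def stylize_with_deepart(image_path: str) -> str:
    """Placeholder: upload the snapshot to DeepArt and return the path
    of the downloaded, stylized result. The actual API call is assumed."""
    raise NotImplementedError("Wire this up to your own DeepArt API access.")


def artcam(countdown: int = 3) -> None:
    cam = cv2.VideoCapture(0)              # open the default webcam
    try:
        # Show a simple on-screen countdown before the snapshot.
        for remaining in range(countdown, 0, -1):
            ok, frame = cam.read()
            if not ok:
                raise RuntimeError("Could not read from webcam")
            cv2.putText(frame, str(remaining), (50, 100),
                        cv2.FONT_HERSHEY_SIMPLEX, 3, (255, 255, 255), 5)
            cv2.imshow("ArtCam", frame)
            cv2.waitKey(1000)              # hold each count for one second

        ok, frame = cam.read()             # take the actual picture
        cv2.imwrite("snapshot.jpg", frame)
    finally:
        cam.release()

    # Send the snapshot off for stylization, then show the result.
    styled_path = stylize_with_deepart("snapshot.jpg")
    cv2.imshow("ArtCam result", cv2.imread(styled_path))
    cv2.waitKey(0)
    cv2.destroyAllWindows()


if __name__ == "__main__":
    artcam()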

While I was playing with this technology, I was taking a food and design class, so I got to experiment with the technology and crêpes. It made me think of a future where the experience of eating is transformed. We know that vision plays a large part in our eating experience, and I’d honestly love to see a world where children eat broccoli art in augmented reality.

The Fusion of Animation and Live Action

To process video, I split the video into individual frames, pass each frame through the API, and restitch the results into a new video. This means my current system has no sense of object permanence, so the output can look a little flickery. If the camera or the lighting were to move, the video would likely lose any artistic quality and simply become a mess.
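Here is a rough sketch of that frame-by-frame pipeline using OpenCV. The stylize_frame function stands in for the per-image DeepArt API call (which isn’t public), and because every frame is processed independently, this is exactly where the flicker comes from.

import cv2


def stylize_frame(frame_path: str, styled_path: str) -> None:
    """Placeholder: send one frame through the DeepArt API and save
    the stylized result to styled_path. The actual call is assumed."""
    raise NotImplementedError


def stylize_video(src: str, dst: str) -> None:
    reader = cv2.VideoCapture(src)
    fps = reader.get(cv2.CAP_PROP_FPS)
    width = int(reader.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(reader.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter(dst, cv2.VideoWriter_fourcc(*"mp4v"),
                             fps, (width, height))

    index = 0
    while True:
        ok, frame = reader.read()
        if not ok:
            break                                # no more frames
        frame_path = f"frame_{index:05d}.png"
        styled_path = f"styled_{index:05d}.png"
        cv2.imwrite(frame_path, frame)           # split: video -> images
        stylize_frame(frame_path, styled_path)   # per-frame API call
        writer.write(cv2.imread(styled_path))    # restitch into a video
        index += 1

    reader.release()
    writer.release()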

However, the team at DeepArt is working to process frames in a more intelligent way that is conscious of object permanence. That would mean art styles could be applied to live-action video, greatly reducing the time animation takes. I don’t see the elimination of animation coming soon (or ever), but it’s an interesting augmentation to the tools of film producers.

Below is a video I made of making Pizza Rolls for my Food and Design class, put through this process. I chose it simply because it was footage where the camera didn’t move and I had the rights to the video.

Next Steps

This was made as a way to explore the possibilities of the software, so it’s not fully accessible to the average user. In the future (if I find the time), I hope to create a web application that lets users pick a style and take a picture from their browser, or, more simply, process video without the command line.

If you have any film that you’d like me to apply this technology to, please send it to katekuehl@gmail.com. I’d love to work together and create something more than just paper-animated pizza rolls. I’d also love to work with artists to present this technology interactively to art lovers.
