From Printing to Painting:
The Emergence of Computationally Creative Robots
My early painting robots were simple. They would dip a brush in paint and then drag the brush from point to point. They drew lines and filled areas in with color, like the connect-the-dot and paint-by-numbers exercises we did as kids. Their paintings were charming and in the beginning I had fun exploring robot-themed art painted by a robot.
I showed off some of our paintings to a friend and he got excited and told me that he had the perfect name for my new invention. He told me I should call it “The Printer.”
His joke bothered me because he was absolutely right. While I had fancied myself the creator of a painting robot, it did little more than operate like an inefficient plotter, and a bad one at that. Did I mention that it broke constantly and made a horrible mess with each painting? I am not even sure calling it a printer was fair to printers.
But the idea that it was just a printer stuck with me, and I have spent more than a decade obsessed with making my painting robots better than an ordinary printer. My robots had to be painters, and even more than that, they had to paint with artistic style.
One of the first things I improved was to install cameras so they could watch themselves work. I added them after multiple portraits failed like this one. In it you can see where the brush fell off as it was filling the background in with black. But instead of realizing this, the robot just went through the motions of painting for several hours, never completing the background. It was like when your printer runs out of ink but keeps printing anyway. This had to be corrected, and the only way I could think to do so was to give it eyes, so I did. Then I programmed the robot to watch its progress. My printer now got feedback on its progress and would change how it went about painting based on that feedback. To me, this made it more than a printer.
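The feedback idea can be sketched as a simple compare-and-repaint loop: the camera image of the canvas is compared to the target image region by region, and any region that is still too far off gets revisited. This is only an illustrative sketch, not the robot's actual code; the function name, tile size, and threshold are all assumptions.

```python
import numpy as np

def regions_to_repaint(target, canvas, tile=8, threshold=30.0):
    """Compare the camera's view of the canvas to the target image,
    tile by tile; return the tiles whose mean color error exceeds
    the threshold, i.e. the areas the robot should revisit."""
    h, w = target.shape[:2]
    todo = []
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            err = np.abs(target[y:y + tile, x:x + tile].astype(float)
                         - canvas[y:y + tile, x:x + tile].astype(float)).mean()
            if err > threshold:
                todo.append((x, y))
    return todo
```

A loop like this is what turns a blind plotter into a closed-loop painter: when a brush falls off, the unpainted tiles keep failing the comparison instead of being silently skipped.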
The fact that my robots could react to how they were painting added a whole new layer of complexity to the paintings. The problem was that now that they could react, I had to teach them how to react. This led to many years of learning about and implementing various AI algorithms. One of the first I implemented, and still one of my favorites, was k-means clustering. I started using k-means clustering to teach my robots to see colors in terms of both their hue and their location in the painting (R, G, B, X, & Y). This gave them a painterly disposition that lent itself to mixing colors more effectively. There were a number of implementations like this, where I had a task that needed optimization and I found a proven AI algorithm to achieve it.
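A minimal NumPy sketch of that idea: cluster every pixel on its color plus its scaled position, so similar paint that also sits near itself in the canvas ends up in the same group. The function name, the coordinate scaling, and the choice of k are illustrative assumptions, not the author's implementation.

```python
import numpy as np

def kmeans_rgbxy(image, k=5, iters=20, seed=0):
    """Cluster pixels on (R, G, B, X, Y) so groups form by
    both color and location in the painting."""
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Scale coordinates to the 0-255 color range so neither
    # color nor position dominates the distance metric.
    feats = np.column_stack([
        image.reshape(-1, 3).astype(float),
        xs.ravel() * 255.0 / max(w - 1, 1),
        ys.ravel() * 255.0 / max(h - 1, 1),
    ])
    rng = np.random.default_rng(seed)
    centers = feats[rng.choice(len(feats), k, replace=False)]
    for _ in range(iters):
        # Assign each pixel to its nearest center.
        d = np.linalg.norm(feats[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its assigned pixels.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = feats[labels == j].mean(axis=0)
    return labels.reshape(h, w), centers
```

Each resulting center is a paint-mixing target: an average color tied to a region of the canvas, rather than a color floating free of where it will be used.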
I also began experimenting with more ambitious AI, such as some early attempts at artificial creativity. I found that by using facial recognition algorithms, such as Viola-Jones, my robots could be aware of at least some of the content they were painting. With this contextual understanding, the robots could generate their own unique compositions while also sticking to a painting’s main theme, in this case a face. I had a lot of fun exploring abstract portraiture with a focus on being as creative as possible while also maintaining a likeness to the subject being painted.
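Viola-Jones gets its speed from the integral image (summed-area table), which lets any rectangular Haar-like feature be summed in four array reads regardless of its size. A self-contained sketch of that core trick (the full cascade of classifiers is omitted):

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero border:
    ii[y, x] = sum of img[0:y, 0:x]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of the w-by-h rectangle at (x, y) in four lookups."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]
```

In practice one would use a ready-made detector (e.g. OpenCV's Haar cascades) rather than building this from scratch, but the constant-time rectangle sum is what makes scanning thousands of candidate face windows feasible.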
At this point my painting robots were far more than printers. But something was still missing. The more capabilities I added, the more I realized that the benchmark of comparing them to a printer was far too low. So I raised my goals. I switched from trying to make them better than a printer, to trying to make them better than me.
To achieve this, I devised a way to teach them to learn from human artists. I built an interface that let me control my robot’s paint brushes by swiping my finger across a touchscreen.
Then I would paint along with my robots. At the same time I had programmed my robots to pay attention to what I was doing and follow my lead. In essence, I was teaching them to imitate me.
This project soon grew well beyond my own art when I teamed up with a friend to open this interface up to the internet. We made it so that hundreds of people could simultaneously paint with my robots in a project called CrowdPainter.
Our work on this was shortlisted in Google’s DevArt competition, though nothing was as rewarding as the insanely interesting art that resulted when hundreds of anonymous users simultaneously tried to paint with my machine. This occurred around the same time that Twitch Plays Pokémon was popular, and it had a similar effect. Each painting was a crowd fighting for control, sometimes to beautiful effect, but more often to absolute disaster.
While the CrowdPainter project was not AI related, it was doing something important that I didn’t realize at the time. It was collecting brush stroke data. Millions upon millions of strokes from human participants around the world were being captured and stored in my databases. I started using this data to help train my robots to paint more naturally. This was done mostly by imitation, but it was the moment that my robots found their artistic style.
Our paintings were now far from printouts. No two ever came out the same. Furthermore, with as much AI as went into each, a butterfly effect was occurring where small deviations at the beginning of a painting would cascade into bigger and bigger changes by the time the painting was complete. I couldn’t help but feel this was similar to my own creative process, where each brushstroke depended not only on where I was trying to get, but also on the artistic effects of all previous brushstrokes.
I was pleased with where I had gotten with my robots. While they were not creative in their own right, they were an amazing tool for my own art and very good at following my artistic direction. I had trained multiple robotic painting assistants and I loved the work that they were doing for me.
I continued using my robots as assistants for a number of years, convinced the AI couldn’t get much more creative. I was of the belief that while AI did cool things, creativity was uniquely human. Therefore AI would never be more than just a tool for us to use. Then I heard about AlphaGo and read reports describing some of the moves it made when it beat Lee Sedol at Go as “creative.” What did this mean?
I started looking into how AlphaGo worked and found deep learning, which I soon realized was just complex neural networks. But this came with the realization that these complex neural networks were finally becoming powerful enough to be useful. The more I looked around, the more I found, including some remarkably interesting work being done with Convolutional Neural Networks (CNNs) and Style Transfer. I set out to learn how to make CNNs with TensorFlow and incorporated them into the process being used by my robots. The results were dramatic, and appeared to be creative. As I learned more about deep learning and experimented with it to improve my robots, I began questioning my belief that only humans could be creative.
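Style transfer in the Gatys et al. tradition compares Gram matrices of CNN feature maps: the channel-to-channel correlations capture an artist's texture independent of where things sit in the image. A minimal NumPy sketch of that style loss, assuming the feature maps would normally come from a pretrained CNN such as VGG rather than being raw pixels:

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of an (H, W, C) feature map: correlations
    between channels, which encode texture/style independent
    of spatial layout."""
    h, w, c = features.shape
    f = features.reshape(h * w, c)
    return f.T @ f / (h * w)

def style_loss(gen_feats, style_feats):
    """Mean squared difference between the two Gram matrices;
    minimizing this pushes the generated image toward the
    style image's texture statistics."""
    g1 = gram_matrix(gen_feats)
    g2 = gram_matrix(style_feats)
    return float(((g1 - g2) ** 2).mean())
```

In a full pipeline this loss is combined with a content loss on deeper-layer activations and minimized by gradient descent on the generated image itself.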
I am not the only one surprised by the results. New York art critic Jerry Saltz recently reviewed one of my CNN-assisted portraits, and I was pleased to hear him say “It doesn’t look like a computer made it.” Never mind the fact that the next thing he said was “That doesn’t make it any good.” The portrait looked creative enough to him that he would not have known it was generated by AI had he not been told so.
Looking like it is creative and being creative are not the same thing. For example, it was me that decided to do the portrait of Elle Reeve. It was me that fed several photos of her into my algorithms, which then picked a favorite, cropped and edited it, and painted it on a stretched canvas. It all began with photographs that I selected and a subject that I chose. This limits the creative potential of any painting made as part of such a process.
One of my favorite robot artists, Harold Cohen, once complained that there were two types of representational AI artists: those that worked from photographs, and those that lied about working from photographs. His point was that unless the machine was coming up with its own imagery, it wasn’t really being creative; it was just a photo filter. I agreed with him. As long as I was giving my robots an image, I was the art director and they were my assistants.
I held onto this view until just recently, when I discovered a relatively new type of neural network called a Generative Adversarial Network, or GAN. I once again turned to TensorFlow to create a face-generating GAN and then incorporated it into the creative process of my painting robots. As I saw my robots imagine, create, and pull faces out of random noise, I realized that they no longer needed to work from photographs. With GANs they could now imagine unique faces and pull them out of the darkness. In fact, all they needed was the right data and they could imagine anything.
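The adversarial setup behind a GAN can be stated in a few lines: a discriminator is trained to tell real images from generated ones, while a generator is trained to fool it. A sketch of the two standard losses in NumPy (the network architectures themselves are omitted; the function names here are illustrative, not from any particular library):

```python
import numpy as np

def bce(pred, target):
    """Binary cross-entropy on sigmoid outputs in (0, 1)."""
    eps = 1e-7
    pred = np.clip(pred, eps, 1 - eps)
    return float(-(target * np.log(pred)
                   + (1 - target) * np.log(1 - pred)).mean())

def d_loss(d_real, d_fake):
    """Discriminator loss: score real images as 1, fakes as 0."""
    return (bce(d_real, np.ones_like(d_real))
            + bce(d_fake, np.zeros_like(d_fake)))

def g_loss(d_fake):
    """Generator loss: fool the discriminator into scoring fakes as 1."""
    return bce(d_fake, np.ones_like(d_fake))
```

Training alternates between the two objectives; once the generator is good enough, feeding it fresh random noise vectors yields faces no photograph ever contained.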
This has all led to my most recent portrait series, called Ghostfaces. These were painted by my latest painting robot project, CloudPainter.
These paintings represent years of exploring artificial creativity and trying to do everything I could to differentiate my painting robots from printers. Each of the 32 paintings in this image was imagined and painted with a wide variety of AI and feedback loops. My robots imagined the faces from nothing. They then interpreted the faces in the style of multiple artists, both living and dead, including myself. And finally, they painted them with feedback loops, paying attention to each stroke and constantly making adjustments as needed.
Don’t look like faces to you? Give my robots a break. They are new to this whole imagination thing.
Pindar Van Arman
Source: Deep Learning on Medium