Original article was published by Johannes Stelzer on Artificial Intelligence on Medium
The idea was straightforward: create an interactive artwork that highlights Baden-Württemberg and its Cyber Valley Initiative as the epicenter of artificial intelligence on our continent. To do this properly, we had to think big and act fast, as the exhibition would open in a mere five weeks. Summer plans? Goodbye! Free time? Maybe in winter. The chance to build something monumental for public space was too tempting to pass up.
Put simply, our plan involved capturing the scene in front of the artwork, feeding it to a supercomputer and teleporting the results onto a gigantic display. The work thus shows an altered mirror image of reality, created by a live-dreaming AI. We wanted the observer to dive in and become part of an art piece that is generated by machine intelligence and live-streamed to the internet.
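Conceptually, the setup above is a capture–process–display loop. The sketch below is purely illustrative and not the installation's actual code: `capture_frame` and `stylize` are hypothetical stand-ins (the real piece used an IDS wide-angle camera and ran style transfer on a DGX Station), and the AI stage is replaced here by a trivial color inversion so the example stays self-contained.

```python
import numpy as np

def capture_frame(rng):
    # Stand-in for a camera grab: a random RGB frame instead of a real sensor.
    return rng.integers(0, 256, size=(480, 640, 3), dtype=np.uint8)

def stylize(frame):
    # Placeholder for the AI stage. Inverting the colors at least makes the
    # "altered mirror image" idea visible; the real work used style transfer.
    return 255 - frame

def run_pipeline(n_frames=3, seed=0):
    rng = np.random.default_rng(seed)
    outputs = []
    for _ in range(n_frames):
        frame = capture_frame(rng)          # camera -> supercomputer
        outputs.append(stylize(frame))      # supercomputer -> video wall
    return outputs

frames = run_pipeline()
print(len(frames), frames[0].shape)  # 3 (480, 640, 3)
```

In the actual installation, each processed frame would be pushed to the video wall and the stream, but the loop structure stays the same.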
Skip all prototypes!
With the idea born, all that remained was to organize the hardware, write the software, find a suitable name for the artwork and, for lack of time, throw everything we knew about project management overboard. Name-wise, we decided on “Grenzauflösung”, which translates to “dissolving borders”. It reflects on the German reunification, while also implying the dissolving borders between us and an ever-faster-evolving AI. Finding sponsors for the missing hardware was a bit more challenging, as the summer break and the ongoing corona pandemic didn’t make it any easier. However, we got lucky: LG Electronics’ Information Display business unit was ready to provide us with a beautiful array of their digital signage displays, which fit the special requirements of the exhibit’s environment, to form one giant video wall spanning more than 200 inches (5.5 meters). NVIDIA, the leading computing company for AI, made available a DGX Station powered by four NVIDIA V100 GPUs. We were also happy to receive an excellent wide-angle camera from IDS Imaging, a leading producer of industrial cameras. To bind everything together and help with all organizational, functional and technical aspects, we could rely on our strategic partnership with Bechtle AG, Germany’s top-notch IT system integrator. All that was lacking now was the AI software to make the dream come true.
How does it work?
Our traditional weapon of choice for turning ordinary pictures into remarkable artworks has been Neural Style Transfer, possibly because this wizardly technique was conceived in our hometown of Tübingen by Leon A. Gatys. In short, the idea is to recycle a pre-trained object recognition network. Subjected to millions of images, the network has taught itself, through the magic process of optimization, to see and categorize our visual world. Over the course of training, it implicitly understood what makes a cat a cat, and how to distinguish it from a dog or a boat. However, when the input image is used as a drawing canvas, the recognition network itself becomes the artist. Perception becomes imagination. The pixels of the input image are adjusted so that they collectively produce specific patterns in the network’s high-level representations. Crucially, the optimization tries to mimic the patterns and relationships evoked by another image, the style image, weaving its structures and elements into the result. Watching this process is mesmerizing and often reminiscent of crystallization. Below you can see an example, where we optimized a picture of the Stuttgart castle to reflect properties of Kanoldt’s painting “Olevano”.
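The “patterns and relationships” that style transfer mimics are typically measured with Gram matrices: correlations between the feature channels of a network layer. The style loss then compares the Gram matrix of the generated image’s features with that of the style image’s features. Below is a minimal NumPy sketch of just this statistic; the function names are our own, and random arrays stand in for real network feature maps.

```python
import numpy as np

def gram_matrix(features):
    """Channel-wise correlations of a feature map: the 'style' statistics.
    features: array of shape (C, H, W) from one layer of the network."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)  # (C, C), normalized by layer size

def style_loss(gen_features, style_features):
    """Mean squared difference between the two Gram matrices."""
    g_gen = gram_matrix(gen_features)
    g_style = gram_matrix(style_features)
    return float(np.mean((g_gen - g_style) ** 2))

rng = np.random.default_rng(0)
style = rng.standard_normal((64, 32, 32))      # stand-in for style-image features
generated = rng.standard_normal((64, 32, 32))  # stand-in for canvas features

print(gram_matrix(style).shape)   # (64, 64)
print(style_loss(style, style))   # 0.0 -- identical features, zero style loss
```

In full style transfer, this loss is summed over several layers (plus a content loss on deeper features), and the input pixels are updated by gradient descent to minimize it.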