Source: Deep Learning on Medium
Neuroevolution of augmenting topologies, NEAT. Kenneth O. Stanley & Risto Miikkulainen (2002). “Evolving Neural Networks Through Augmenting Topologies”
It’s more than a mouthful and spellcheck doesn’t recognize it as English, but I’ve gone through every word of this research paper from 17 years ago. Sometimes I patiently dedicated 12-hour days to debugging on Google Colab. Other times I walked away in frustration for almost a week, considering a life in fast food instead. Why even bother with such a highly mathematical and generally obtuse prospect? What do I have to show for my efforts aside from red eyes and 600 lines of still-obtuse Python code that balances two simulated poles on a cart?
Why try implementing a research paper at all?
I’ve been doing deep learning in Python for only a year. I heard about TensorFlow by chance, started learning that very day, and have kept at it every single day since. I have a string of red ‘X’s on my calendar to prove it. I’ve built a home-brew curriculum, taken free MOOCs, and followed hundreds of tutorials. I’ve done dozens of projects from scratch in PyTorch and TensorFlow. I’ve started freelancing and co-founded a start-up.

Without a PhD it has been tough comparing myself to others in the field. I don’t have a decade and a small loan of a million dollars to support my family while I earn a piece of paper. If I can do the work, I’ll do it. I’m a deep learning developer because it’s what I do, not because someone gave me permission. So I decided to face the dragon and implement my first research paper. I figured that at the cutting edge of the field, whether for my own start-up or for anyone who would hire me, I would be implementing state-of-the-art techniques from research papers and bending them to my will with my own data sets.
I’m a deep learning developer because it’s what I do, not because someone gave me permission.
If you haven’t been paying attention to arXiv, then it might seem strange to dig up an old paper. But time and again, the best-performing new neural networks are just assemblies of research from decades ago with more compute and data. AlexNet changed the game in 2012, kicking off the latest AI summer, but it was just a very good variant of CNNs dating back to the 1980s. Old is the new new.
There has also been a steady stream of “AI Learns to Play Mario Brothers” videos and the like in my YouTube feed for several years. Digging a little deeper, it seems many of these use a NEAT-based algorithm. Obviously it’s my turn to (play games with an AI) create a new state-of-the-art neural net using the NEAT whose time has come. It’s going to be (wicked NEATo, Mr. Wizard) very fast at evolving and training nets on data that’s difficult for contemporary neural networks.
What is NEAT?
It’s actually black magic. Rituals with candles and sacrifices under a full moon. Be sure to use real silver though or your model will never converge to an optimal solution, I learned that the hard way.
It’s evolving from the simplest possible neural network into a minimal one custom-designed to optimize behavior in a dynamic environment. NEAT can quickly master a game, a simulation, or even the control of a self-driving vehicle. The way you get from bare bones to a minimal viable network is to generate a population of the smallest unmodified structures: input nodes connected directly to the output, with initial weights of 1. Then you change the weights and add nodes and connections by mutating and breeding this population. At each generation, divide the population into species based on their neural network structure and weights. Keep the highest performers of each species unchanged, and drop the lowest performers. Each species protects its new mutations until they can adapt and possibly outperform other species, rather than destroying them in a competition with every member of the generation. The cornerstone of NEAT is that every node or connection generated gets a unique historical marker. That makes comparing members and tracking evolution mathematically easy; it’s the reason all this works. Comparing the complex geometry of two neural networks is very difficult, but with historical markers you can just line up nodes and connections and compare. Black magic is great.
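To make “line up nodes and connections and compare” concrete, here is a minimal sketch of how historical markers let you measure how different two genomes are. The gene class and function names are my own illustration, not from my code base; the coefficients follow the compatibility-distance formula in the NEAT paper.

```python
from dataclasses import dataclass

@dataclass
class ConnectionGene:
    innovation: int   # historical marker: globally unique per structural innovation
    weight: float
    enabled: bool = True

def compatibility(genome_a, genome_b, c1=1.0, c2=1.0, c3=0.4):
    """Compatibility distance between two genomes, lining up genes by
    historical marker instead of comparing network geometry directly."""
    a = {g.innovation: g for g in genome_a}
    b = {g.innovation: g for g in genome_b}
    cutoff = min(max(a), max(b))
    matching = a.keys() & b.keys()
    # Non-matching genes inside the other genome's innovation range are
    # "disjoint"; those beyond it are "excess".
    disjoint = sum(1 for i in a.keys() ^ b.keys() if i <= cutoff)
    excess = sum(1 for i in a.keys() ^ b.keys() if i > cutoff)
    avg_weight_diff = (sum(abs(a[i].weight - b[i].weight) for i in matching)
                       / len(matching)) if matching else 0.0
    n = max(len(a), len(b))  # normalize by the larger genome's size
    return c1 * excess / n + c2 * disjoint / n + c3 * avg_weight_diff
```

Two genomes in the same species will have a small distance; a genome whose distance to every species representative exceeds a threshold founds a new species.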
An honest assessment
A project I thought would take just a few days of focused work quickly shifted scale from one week, to a month, to I-have-no-idea-what-I’m-doing-with-my-life. Three months in, I’ve gotten it working very close to the original. My implementation has a short list of differences: some deliberate, some mysterious, and some due to a lack of clarity in the research paper. Let’s go over some examples. I chose to give weights to both the nodes and the connections, instead of just the connections. This lowers the need for more nodes and connections, and I felt the restriction was unnecessary. I also used no specialized bias node; the original had one, but it seemed unnecessary, so I left it out. Then there were questions the original never clearly answered, like how many of the lowest-performing members of each generation to remove before mating. I interpreted it as one per species.
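My interpretation of that culling step, keeping each species’ champion untouched and dropping its single worst member before reproduction, can be sketched like this. The function and its inputs are hypothetical stand-ins, not the actual code from my repository:

```python
def cull_and_keep_champion(species, fitness):
    """For each species: keep the top performer unchanged (the champion)
    and drop the single lowest performer before mating -- one reading of
    the paper's underspecified culling rule."""
    survivors = []
    for members in species:
        ranked = sorted(members, key=fitness, reverse=True)
        champion = ranked[0]
        if len(ranked) > 1:
            ranked = ranked[:-1]   # remove the worst member of this species
        survivors.append((champion, ranked))
    return survivors
```

Other implementations cull a fixed fraction of each species instead; either way, culling per species rather than across the whole population is what protects young topologies.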
My NEAT implementation takes only a few minutes to run the 3 generations it needs to find a solution. The winning result is a neural network with only 4 input nodes connected directly to 1 output node. The weights have almost all evolved away from their initial value of 1. The final network must balance the poles for 200 steps in each of 500 trials without dropping a single pole or sliding off the ends of the track. I’m sure there are better results out there, both with NEAT and with other models, but that wasn’t my goal. I’ve done Cart-Pole with Deep Q-networks, random choices, and Monte Carlo policies. NEAT is so far my only implementation that outperforms random choices.
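That success criterion is simple to state in code. A rough sketch of the check, where `make_env`, `env.step`, and `network.act` are placeholders for whatever pole-balancing simulator and network interface you use, not a real library API:

```python
def solved(network, make_env, trials=500, max_steps=200):
    """The evolved network passes only if it survives every one of
    `trials` episodes for the full `max_steps` steps."""
    for _ in range(trials):
        env = make_env()
        obs = env.reset()
        for _ in range(max_steps):
            obs, done = env.step(network.act(obs))
            if done:        # a pole fell or the cart left the track
                return False
    return True
```

All 500 trials must succeed, so a single dropped pole anywhere disqualifies the candidate network.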
The bigger picture
Maybe I won’t be using NEAT to build a predictor for my business, but it could be part of a larger ensemble of techniques. Reinforcement learning problems are complicated, and modern machine learning techniques don’t yet have perfect answers to any of them. NEAT will be in my toolbox for a while, but more important is the experience of decrypting a research paper into executable code. It’s harder than you think, at least the first time. In the end I have a much better understanding of the math and of the hundreds of small, nuanced decisions about how to structure the algorithms.
After several hundred hours over the course of three months, the code is up on GitHub where anyone can find it. A few hours reading the code base is enough to adapt it to many problems. A deeper understanding of the algebra and algorithms would have been nice in high school, but I’m trying to start a new career right now. Fast and working is better for me today than slow and educational. Next time I’ll just cut and paste the code and skim the research paper. If I’m going to spend three months solving a problem again, I’m going to get paid for the results, not just write an article about them.