A New Brain-inspired Intelligent System Drives a Car Using Only 19 Control Neurons!

Original article was published by Louis (What’s AI) Bouchard on Artificial Intelligence on Medium


IST Austria And MIT’s New Intelligent System — NCPs

Let’s now dive a little deeper into how this new system works.
It consists of two parts. First, a compact convolutional neural network extracts structural features from the pixels of the input images. Using this information, the network decides which parts of the image are important or interesting and passes only those parts on to the second system.
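To make that split concrete, here is a minimal sketch of what such a compact perception head could look like in PyTorch. The layer sizes, pooling step, and feature dimension below are placeholders of my own, not the exact values from the paper; the only point is that the whole camera frame gets boiled down to a small feature vector before anything else happens.

```python
import torch
import torch.nn as nn

# Illustrative sketch of the perception part: a compact convolutional head
# that compresses a camera frame into a small feature vector.
# Layer sizes and the feature dimension are placeholders, not the paper's values.
class CompactPerceptionHead(nn.Module):
    def __init__(self, feature_dim: int = 32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),  # fixed-size map regardless of input resolution
        )
        # Project the pooled feature map down to the compact vector that is
        # all the downstream control network ever sees.
        self.project = nn.Sequential(nn.Flatten(), nn.Linear(48 * 4 * 4, feature_dim))

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        # frame: (batch, 3, height, width) camera image
        return self.project(self.conv(frame))


# Example: one dummy 160x240 RGB frame -> a 32-dimensional feature vector.
features = CompactPerceptionHead()(torch.randn(1, 3, 160, 240))
print(features.shape)  # torch.Size([1, 32])
```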

End-to-end representation of the architecture — Image from the paper

The second part is what they call the “control system”: it steers the vehicle using decisions made by a set of biologically inspired neurons. This control part is also called a neural circuit policy, or NCP. Essentially, it maps the outputs of the compact convolutional model onto only 19 neurons arranged in an RNN structure inspired by the nematode’s nervous system, which controls the vehicle and keeps it inside its lane, following the architecture shown above. You can find more details about the implementation of these NCP networks in their paper or in the clear guide they made on their GitHub [2].
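Here is a rough sketch of what the control side’s interface looks like. The real NCP uses liquid time-constant neurons with a sparse, worm-inspired wiring, and their GitHub guide [2] shows how to build it with their own library; below, a standard 19-unit GRU cell stands in for that cell purely to show the shape of the loop: a small feature vector goes in every frame, and a steering command plus an updated recurrent state come out.

```python
import torch
import torch.nn as nn

class TinyRecurrentController(nn.Module):
    """Stand-in for the 19-neuron NCP controller (a GRU cell instead of LTC neurons)."""
    def __init__(self, feature_dim: int = 32, num_neurons: int = 19):
        super().__init__()
        self.cell = nn.GRUCell(feature_dim, num_neurons)
        self.steering = nn.Linear(num_neurons, 1)  # single steering command

    def forward(self, features, state):
        # features: (batch, feature_dim) compact output of the perception CNN
        # state:    (batch, num_neurons) recurrent state carried between frames
        state = self.cell(features, state)
        return self.steering(state), state


# Minimal usage: feed a short sequence of (dummy) per-frame feature vectors.
controller = TinyRecurrentController(feature_dim=32, num_neurons=19)
state = torch.zeros(1, 19)
for _ in range(5):
    features = torch.randn(1, 32)          # would come from the perception CNN
    steering, state = controller(features, state)
```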

This is where the biggest reduction in parameters happens. Mathias Lechner explains that “NCPs are up to 3 orders of magnitude smaller than what would have been possible with previous state-of-the-art models,” as you can see in Table 2, shown below. Both systems are trained simultaneously and work together to drive the car.
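If you want to get a feel for comparisons like this on your own models, counting trainable parameters in PyTorch is a one-liner. The two models below are arbitrary placeholders, not the networks from Table 2; they just show how a 19-unit recurrent cell compares in size with a more typical RNN layer.

```python
import torch.nn as nn

# Count trainable parameters of any PyTorch model.
def count_parameters(model: nn.Module) -> int:
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

print(count_parameters(nn.GRUCell(32, 19)))  # a tiny 19-unit controller, ~3k parameters
print(count_parameters(nn.LSTM(32, 256)))    # a more typical RNN layer, ~300k parameters
```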

Network size comparison — Image from the paper

Because the network is so small, the researchers were able to see where the system focuses its attention in the input images. They discovered that having such a small network extract the most important parts of the picture made the few decision neurons focus exclusively on the curbside and the horizon. This is a unique behavior among current artificial intelligence systems, which analyze every single detail of an image and use far more information than needed.

Global network dynamics — Image from the paper

Take a second to look at how little information is sent into the NCP network compared to other types of networks. From this image alone, we can see that it is clearly more efficient and faster to compute than current approaches.

Plus, while noise such as rain or snow is a big problem for current lane-keeping approaches, their NCP system demonstrated strong resistance to input artifacts thanks to its architecture and novel neural model, keeping its attention on the road horizon even when the input camera feed is noisy, as you can see in the short video below.

Robustness demonstrated in a noisy environment — Image from the paper