Neurons are More Complex than What We Have Imagined!


One of the biggest misconceptions around is the idea that Deep Learning or Artificial Neural Networks (ANN) mimic biological neurons. At best, ANN mimic a cartoonish version of a 1957 model of a neuron. Neurons in Deep Learning are essentially mathematical functions that compute a similarity measure between their inputs and their internal weights. The closer the match, the more likely an action is performed (i.e. a nonzero signal is sent rather than zero). There are exceptions to this model (see: Autoregressive networks); however, it is general enough to include the perceptron, convolutional networks and RNNs.
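The Deep Learning "neuron" described above can be written in a few lines. This is a minimal sketch, not any particular framework's implementation; the ReLU activation and the specific weight values are illustrative assumptions:

```python
import numpy as np

def relu(z):
    # Clip negative pre-activations to zero: "no signal".
    return np.maximum(0.0, z)

def neuron(x, w, b):
    """A Deep Learning 'neuron': a dot-product similarity between the
    input x and the weight vector w, shifted by a bias and passed
    through a nonlinearity. The better x aligns with w, the larger
    the output signal."""
    return relu(np.dot(w, x) + b)

x = np.array([1.0, 0.5, -0.2])
aligned = neuron(x, np.array([1.0, 0.5, -0.2]), 0.0)   # input matches weights
opposed = neuron(x, np.array([-1.0, -0.5, 0.2]), 0.0)  # input opposes weights
```

Here `aligned` is positive while `opposed` is clipped to zero: the whole model of the neuron is a similarity match gated by a threshold.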

Jeff Hawkins of Numenta has long lamented that a more biologically inspired approach is needed. So, in his research on building cognitive machinery, he has architected systems that more closely mimic the structure of the neocortex. Numenta’s model of a neuron is considerably more elaborate than the Deep Learning model of a neuron:

https://www.slideshare.net/numenta/realtime-streaming-data-analysis-with-htm

So the team at Numenta is banking on this approach in the hope of creating something more capable than Deep Learning. Only time will tell whether they succeed. However, the current consensus is that Deep Learning (despite its cartoon model of a neuron) has proven unexpectedly effective at performing all kinds of mind-boggling feats of cognition.

New experiments on the nature of neurons have, however, revealed that they are even more complex than we had imagined:

New Types of Experiments Reveal that a Neuron Functions as Multiple Independent Threshold Units

(1) A single neuron’s spike waveform typically varies as a function of the stimulation location.

(2) Spatial summation is absent for extracellular stimulations from different directions.

(3) Spatial summation and subtraction are not achieved when combining intra- and extracellular stimulations, as well as for nonlocal time interference, where the precise timings of the stimulations are irrelevant.

In short, there is a lot more going on inside a single neuron than the simple idea of integrate-and-spike suggests. Neurons are no longer pure functions; rather, they are stateful machines with behavior that we have yet to understand.
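The distinction between a pure function and a stateful machine can be made concrete. The sketch below is a toy contrast, not a biological model: the leaky integrate-and-fire dynamics, threshold, and leak values are all illustrative assumptions:

```python
def pure_neuron(x, w):
    # A pure function: same input, same output, every time. No memory.
    return max(0.0, sum(wi * xi for wi, xi in zip(w, x)))

class StatefulNeuron:
    """Toy leaky integrate-and-fire unit: the output depends on an
    internal membrane potential that persists between calls, so
    identical inputs can produce different outputs over time."""

    def __init__(self, w, threshold=1.0, leak=1.0):
        self.w = w
        self.threshold = threshold
        self.leak = leak
        self.potential = 0.0  # internal state carried across inputs

    def step(self, x):
        self.potential = self.leak * self.potential + sum(
            wi * xi for wi, xi in zip(self.w, x))
        if self.potential >= self.threshold:
            self.potential = 0.0  # reset after firing
            return 1
        return 0

n = StatefulNeuron([0.6])
first = n.step([1.0])   # potential reaches 0.6: below threshold, no spike
second = n.step([1.0])  # potential reaches 1.2: fires and resets
```

The same input produces different outputs on consecutive calls, which a pure function can never do. The experiments above suggest real neurons carry far richer hidden state than even this.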

If you think this throws a monkey wrench into our understanding, there’s an even newer discovery that is even crazier:

Cells hack virus-like protein to communicate

Many of the extracellular vesicles released by neurons contain a gene called Arc, which helps neurons to build connections with one another. Mice engineered to lack Arc have problems forming long-term memories, and several human neurological disorders are linked to this gene.

What this research reveals is that there is a way for neurons to communicate with each other by sending packages of code. These are packages of instructions, not packages of data. There is a profound difference between sending code and sending data. What this implies is that the behavior of one neuron can change the behavior of another neuron. Not through observation, but rather through injection of behavior.
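The code-versus-data distinction can be sketched in software terms. This is purely an analogy, not a model of Arc biology: the class, method names, and behaviors below are hypothetical:

```python
class Neuron:
    def __init__(self):
        self.inbox = []
        self.respond = lambda signal: signal * 0.5  # default behavior

    def receive_data(self, signal):
        # Data is stored; how the neuron behaves is unchanged.
        self.inbox.append(signal)

    def receive_code(self, new_behavior):
        # Code replaces the behavior itself: injection, not observation.
        self.respond = new_behavior

b = Neuron()
before = b.respond(2.0)                    # default behavior: halves the signal
b.receive_data(2.0)
unchanged = b.respond(2.0)                 # receiving data changed nothing
b.receive_code(lambda signal: signal * 3)  # another neuron injects new behavior
after = b.respond(2.0)                     # same input, different behavior
```

Receiving data leaves `respond` untouched; receiving code rewrites it. That is the distinction the Arc finding hints at.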

So in reality, even at the smallest unit of our cognition, there is a kind of conversation that is going on between individual neurons that modifies their behavior. So not only are neurons machines with state, but neurons are machines with an instruction set and a way to send code to each other. I’m sorry, but this is totally wild.

So it’s now time to get back to the drawing board and begin exploring more complex kinds of neurons. The most complex kinds we’ve seen to date are the ones derived from the LSTM. Here is the result of a brute-force architectural search for LSTM-like neurons:

It’s not clear why they look so complex, though.

There is a new paper that explores more complex, hand-engineered LSTMs:

[1801.10308v1] Nested LSTMs

The paper reveals measurable improvements over standard LSTMs:
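The core idea of the Nested LSTM is that the outer cell's memory update is itself computed by an inner LSTM. The single-step sketch below is my reading of Moniz & Krueger (2018) in NumPy; the weight shapes, initialization, and output nonlinearities are assumptions, not the paper's exact formulation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_gates(x, h, W, b):
    """Compute the four standard LSTM gate activations from [x; h]."""
    z = W @ np.concatenate([x, h]) + b
    i, f, o, g = np.split(z, 4)
    return sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)

def nested_lstm_step(x, h, c, c_inner, W_out, b_out, W_in, b_in):
    """One step of a Nested LSTM (sketch). Instead of the additive
    update c = f*c + i*g, the outer memory is produced by an inner
    LSTM whose input is the gated candidate and whose hidden state
    is the gated old memory."""
    i, f, o, g = lstm_gates(x, h, W_out, b_out)
    x_tilde = i * g   # inner LSTM's input
    h_tilde = f * c   # inner LSTM's hidden state: gated outer memory
    ii, ff, oo, gg = lstm_gates(x_tilde, h_tilde, W_in, b_in)
    c_inner = ff * c_inner + ii * gg     # inner memory: standard update
    c_next = oo * np.tanh(c_inner)       # inner output becomes outer memory
    h_next = o * np.tanh(c_next)
    return h_next, c_next, c_inner

# Hypothetical sizes: 3 inputs, 4 hidden units.
n_in, n_hid = 3, 4
rng = np.random.default_rng(0)
W_out = 0.1 * rng.standard_normal((4 * n_hid, n_in + n_hid))
W_in = 0.1 * rng.standard_normal((4 * n_hid, 2 * n_hid))
b_out, b_in = np.zeros(4 * n_hid), np.zeros(4 * n_hid)

h = c = c_inner = np.zeros(n_hid)
h, c, c_inner = nested_lstm_step(
    rng.standard_normal(n_in), h, c, c_inner, W_out, b_out, W_in, b_in)
```

Note the extra state: the unit now carries an inner memory `c_inner` in addition to `h` and `c`, so each "neuron" is a small stateful machine nested inside another.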

In summary, a research plan that explores more complex kinds of neurons may in fact bear promising fruit. I predict that in the near future we shall see more aggressive research in this area. After all, nature is already unequivocally telling us that neurons are more complex than we have ever imagined!

Source: Deep Learning on Medium