Why unimaginably intelligent machines are just around the corner

The original article was published by Maciej Wolski in Artificial Intelligence on Medium


In a presentation at the NeurIPS 2019 conference, Yoshua Bengio argued that to make progress we need to realize functions similar to both System 1 and System 2 in the brain. He compared them to “Current DL” and “Future DL” respectively.

Even though they have completely opposite goals and modes of operation.

One is fast and intuitive; the other slow and analytical.

Ask yourself: can you build the latter from the former? To me, they seem to come from different worlds.

It seems much more reasonable to start from a blank page, analyze what we already have available and what we still need to invent.

And we have a large reservoir of tools available: from graphs, decision trees and causal reasoning algorithms to various types of neural networks, including some that have been a little forgotten.

We also have many clues from neuroscience that are not considered at all in the current AI technology stack. My favorites are astrocytes, which regulate neurons yet are skipped entirely in AI implementations, and the multiple subcortical components that allow autonomous operation, not only in us but also in animals.

So if we know that the brain does so many different things, and we have so many tools available, why not combine them and experiment with their different setups?

That is what I did in the past.

People like to discard things very quickly. Because the theory of Hierarchical Temporal Memory (HTM) was already defined by Jeff Hawkins and Numenta researchers, they think there is no point in further experimenting with architectures inspired by cortical columns in the brain. Even though Geoff Hinton’s capsules are inspired by them too.

Since model-based learning overtook instance-based techniques, it supposedly makes no sense to think seriously about the latter.

And so on.

But the truth is that the modular architecture of the Artificial Brain, composed of a uniform learning mechanism based on repeated computational structures, plus different feature extractors with various reasoning modes and a group of supporting modules, is probably the only way to realize the goal of advanced general intelligence.

Are you excited by GPT-3 and already waiting for AGI to emerge in one of the next iterations? Try putting it inside a robot and check how much it learns about its environment after you leave it alone for some time…

Although interesting, the main thing GPT-3 proves is that to get great results in multiple dimensions, you need huge neural architectures.

Wouldn’t it be great to have such an enormous number of neurons and parameters, but use only those that are useful for the task at hand?

That is what the human brain does to get its amazing efficiency and performance.

You have heard that we use only a few percent of our brain at any single moment, right?

But how does the brain know which parts to pick? Well, there is a dedicated component for that: the thalamus. It is a relay station from the senses to the cortex and between parts of the cortex.
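
To make this concrete, here is a minimal sketch of thalamus-style routing in the spirit of mixture-of-experts gating. Everything in it, the Expert and ThalamicGate names and the scoring rule, is an illustrative assumption, not the brain's actual algorithm or anyone's product:

```python
# Sketch of conditional computation: a gate scores a pool of "expert"
# sub-networks and relays the input only to the top-k of them.
# All names and the scoring rule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

class Expert:
    """A tiny sub-network standing in for one specialized region."""
    def __init__(self, dim):
        self.w = rng.normal(size=(dim, dim)) * 0.1

    def __call__(self, x):
        return np.tanh(x @ self.w)

class ThalamicGate:
    """Scores every expert for the current input and relays the signal
    only to the k most relevant ones; the rest stay idle."""
    def __init__(self, dim, n_experts, k=2):
        self.scorer = rng.normal(size=(dim, n_experts)) * 0.1
        self.k = k

    def route(self, x):
        scores = x @ self.scorer
        return np.argsort(scores)[-self.k:]  # indices of the top-k experts

dim, n_experts = 16, 8
experts = [Expert(dim) for _ in range(n_experts)]
gate = ThalamicGate(dim, n_experts, k=2)

x = rng.normal(size=dim)
active = gate.route(x)
# Only 2 of the 8 experts do any computation for this input.
y = sum(experts[i](x) for i in active) / len(active)
print("active experts:", active)
```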

And what allows us to make different choices or perform different actions when confronted with similar situations?

First of all, chemical signaling through hormones and neurotransmitters/neuromodulators.

It is as simple as this: you pass by a restaurant with tasty food; one day you feel hungry (the hunger hormone at work) and another day you don’t. That drives different decisions and actions.

I often hear that we are not able to mimic emotions and that machines can’t be like us for many reasons. But it seems to me that the notion of machines not being creative has already been sufficiently disproven by all these painting/music/poem generators.

It is the same fear of losing our perceived special position in the world that kept people believing Earth was the center of the universe for so long.

Well, we are biological machines, living a significant part of our lives on autopilot. Believe it or not, this is true. It is so common that people wonder afterward: why did I make that decision, why did I say that? Because your autopilot and chemical signaling make decisions for you, unless you override them.

Fun fact: all spiritual traditions organized around meditation try to show you exactly this: that there is a very rich life happening under your skull without your conscious awareness. The moment you fully realize the meaning of these words, you are awakened.

We can consciously override the automatic reactions of our brain and body, just as we can compute emotions once we understand their composition. And the most recognized emotions are various combinations of neurochemicals: mainly dopamine, noradrenaline and serotonin.

Ever read about the research in which rats conditioned to believe that an electric shock is unavoidable stop reacting to an open door in the maze (the learned-helplessness effect)? Well, blame the associated low level of dopamine activity.

What makes the difference between picking the fight option and the flight option? The current level of noradrenaline, combined with low serotonin and high dopamine.
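
As a toy illustration that such a choice really is computable, here is a sketch that reads a behavioral mode off three neuromodulator levels. The function and every threshold in it are invented for this example; only the principle, that the mapping is an ordinary function of the current levels, comes from the text above.

```python
def behavioral_mode(noradrenaline, serotonin, dopamine):
    """Toy mapping from neuromodulator levels to a behavioral mode.
    The thresholds are invented; only the principle matters."""
    if noradrenaline > 0.7 and serotonin < 0.3:
        # High arousal with low serotonin: fight if dopamine is high, else flee.
        return "fight" if dopamine > 0.5 else "flight"
    return "calm"

print(behavioral_mode(noradrenaline=0.9, serotonin=0.2, dopamine=0.8))  # fight
print(behavioral_mode(noradrenaline=0.9, serotonin=0.2, dopamine=0.3))  # flight
print(behavioral_mode(noradrenaline=0.2, serotonin=0.8, dopamine=0.5))  # calm
```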

Once we realize that all these things are computable, emotions and other features of the mind alike, we are left with the question of what to compute, and how.

At AGICortex we are working with 3D neural networks that are very different from their traditional counterparts.

In the ANNs you knew before, the computational unit is a single neuron. In our case it is a whole group of them: a mini neural network in itself, which can be used alone or in combination with others.
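
Here is a rough sketch of the difference, with a generic two-layer block standing in as the unit (an assumption for illustration; it is not the actual AGICortex design):

```python
# Sketch: the computational unit is a small network, not a single neuron.
# "NeuralUnit" is a generic stand-in invented for this example.
import numpy as np

rng = np.random.default_rng(1)

class NeuralUnit:
    """One computational unit that is itself a mini neural network,
    usable alone or composed with other units."""
    def __init__(self, d_in, d_hidden, d_out):
        self.w1 = rng.normal(size=(d_in, d_hidden)) * 0.1
        self.w2 = rng.normal(size=(d_hidden, d_out)) * 0.1

    def __call__(self, x):
        return np.maximum(x @ self.w1, 0.0) @ self.w2  # small ReLU MLP

unit_a = NeuralUnit(16, 32, 16)
unit_b = NeuralUnit(16, 32, 16)

x = rng.normal(size=16)
alone = unit_a(x)             # a unit working on its own
combined = unit_b(unit_a(x))  # two units stacked into a deeper circuit
```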

Can you imagine generating Machine Learning models on the fly from a big repository of available components? Impossible?

Not if you have a dedicated module that is aware of all the available units and their characteristics.
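
A minimal sketch of what such a module could look like: a registry that knows each unit's input/output signature and chains compatible units on demand. The UnitRegistry name and the deliberately naive depth-limited search are assumptions for illustration:

```python
class UnitRegistry:
    """Keeps track of available units and assembles models on the fly."""
    def __init__(self):
        self.units = []  # (name, d_in, d_out) for every registered unit

    def register(self, name, d_in, d_out):
        self.units.append((name, d_in, d_out))

    def assemble(self, d_in, d_out, depth=3):
        """Depth-limited search for a chain of units mapping d_in -> d_out."""
        if depth == 0:
            return None
        for name, i, o in self.units:
            if i != d_in:
                continue
            if o == d_out:
                return [name]
            rest = self.assemble(o, d_out, depth - 1)
            if rest is not None:
                return [name] + rest
        return None

reg = UnitRegistry()
reg.register("vision_encoder", 16, 32)
reg.register("feature_mixer", 32, 8)
print(reg.assemble(16, 8))  # ['vision_encoder', 'feature_mixer']
```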

Would you like the possibility of real-time incremental learning without the effects of “catastrophic forgetting”? Impossible?

Not if you have a kind of external memory, deeply incorporated into the neural architecture.
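
One way to picture it, assuming a simple append-only key-value store with nearest-neighbor reads (a simplified stand-in, not the actual architecture): new facts are appended next to the old ones instead of being ground into shared weights, so nothing old gets overwritten.

```python
# Sketch of an external memory: writing never modifies old entries,
# so earlier knowledge cannot be "catastrophically forgotten".
import numpy as np

class ExternalMemory:
    def __init__(self):
        self.keys, self.values = [], []

    def write(self, key, value):
        """Incremental learning: append one (key, value) pair at a time."""
        self.keys.append(np.asarray(key, dtype=float))
        self.values.append(value)

    def read(self, query):
        """Return the value whose key is most similar to the query."""
        query = np.asarray(query, dtype=float)
        sims = [float(query @ k / (np.linalg.norm(query) * np.linalg.norm(k)))
                for k in self.keys]
        return self.values[int(np.argmax(sims))]

mem = ExternalMemory()
mem.write([1.0, 0.0], "cat")
mem.write([0.0, 1.0], "dog")  # learning "dog" leaves "cat" untouched
print(mem.read([0.9, 0.1]))   # -> cat
```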

Want to make your AI agent aware of time and past events?

Equip it with a dedicated module, inspired by the biological hippocampus.
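
A bare-bones sketch of such a module, assuming nothing more than timestamped episode storage with recency queries; the EpisodicMemory name and its methods are illustrative, not a description of the biological hippocampus:

```python
# Sketch of a hippocampus-inspired buffer: events are stored with
# timestamps, so the agent can answer "what happened, and when?".
import time

class EpisodicMemory:
    def __init__(self):
        self.episodes = []  # (timestamp, event), in order of arrival

    def store(self, event):
        self.episodes.append((time.time(), event))

    def recall_since(self, t):
        return [event for ts, event in self.episodes if ts >= t]

    def most_recent(self, n=1):
        return [event for _, event in self.episodes[-n:]]

em = EpisodicMemory()
t0 = time.time()
em.store("saw a red ball")
em.store("heard the door close")
print(em.most_recent(2))    # the two latest events, oldest first
print(em.recall_since(t0))  # everything observed since t0
```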

And so on.

Intelligence starts at the lowest organizational level. The biological cell is able to sustain itself and adapt its own operation to the state of the internal and external environment. Then cells build specific organs and whole organisms, from scratch. Again and again.

Maybe, then, artificial neural units that are at once more complex, more adaptable and more capable can push us towards more intelligent AI? Just like the quite complex individual cells or cortical columns in the brain?

To sum up, there are two ways to proceed:

1) Believe that the world can be contained in a model. Collect datasets and train models with backpropagation, which makes it harder to explain what is happening inside and impossible to add new information effortlessly.

2) Build self-learning AI with complex, modular neural architectures made of repeated sets of highly capable computational units (this is how our neocortex is built), with high-level structure and low-level adaptation to get energy efficiency, autonomous incremental learning and explainability.

Support them with a range of additional subcortical modules that provide extra capabilities, because neural networks alone are just an associative memory between input and output, not enough to get us to the AGI level.

Then just turn the system on.

Welcome to the age of Artificial Brains.

And it is just the beginning…