Why hybrid AI is evil

Source: Deep Learning on Medium

Wandering in the desert for 40 years. Picture via thebricktestament.com

The connectionism vs. symbolism seesaw naturally leads to the idea of hybrid AI: adding a symbolic layer on top of some deep learning to get the best of both worlds. And it definitely can work in specific practical applications. Why wouldn’t it, if every part produces some useful results and the weaknesses of each approach are more or less opposite to each other?

However, if you are going in this direction, you should understand that it’s no more than using crutches and prosthetics: an artificial part is better than nothing, and in some narrow cases it can even be better, but it’s not fully integrated with the body, it doesn’t have the sensitivity the brain tries to get from it, it’s not self-healing, and so on. Hopefully, in the future we will overcome all these limitations and use such parts to improve the body rather than to replace missing parts, so they won’t be prostheses anymore, but that’s another topic.

To be precise, the situation is even worse: a prosthesis is an addition to a living body, but hybrid AI is an attempt to build something solely from prostheses and crutches.

What is the alternative? To build it right, end to end, so that you won’t need any prostheses to make it work. And in general, it doesn’t matter which side you start from.

The very idea of separate AI systems for fast and slow processing, or System 1 and System 2, is unhealthy because our brain doesn’t have two separate parts for this; it’s just two modes of using the same representations in one system.

Every time someone says that symbolic AI is not suitable for raw images, or that deep learning needs an extension or improvement to deal with reasoning and common sense, it’s just a confession of a wrong approach or a wrong implementation. Our cortex has the same structure throughout and therefore uses the same principles of processing, whether in the occipital lobe for visual processing or in the prefrontal cortex responsible for higher cognitive functions such as decision making. There are no special places for symbolic processing or vice versa; it’s all the same.

To use the results of deep learning in symbolic AI, you need to reduce them to a symbol, losing the dimensionality of the representation, which is crucial for human-like reasoning. On the other hand, if you’re smart or lucky enough to invent a data structure for a symbol which preserves the original dimensionality, why would you need to operate on anything else?

So, hybrid AI is wrong, but why is it evil? Because it is capable of creating a nearly infinite number of combinations of architectures and an illusion of progress at the same time, which can keep the AI community busy for many years without any fundamental progress, occasionally spawning impressive but far-from-human-like examples of intelligence.

Instead, the very absence of a holistic approach covering the same spectrum of tasks as the human brain has to be recognized as a fundamental problem for the field and treated like, for example, the problem of quantum gravity in physics. Otherwise, 40 years of wandering in the desert looks like an optimistic scenario.