Replicating human mind. Where are we lacking?

Original article was published on Artificial Intelligence on Medium


There have been many significant advancements in the field of AI over the past decade: DeepMind's AlphaGo beating the human champion, Tesla's autonomous cars, IBM Watson's Jeopardy! triumph, and many more. But does that mean we are close to making a system behave like a human?

No, we are nowhere close. With each step we take towards understanding the human brain, it seems to slip further away. In this article, I would like to highlight three characteristics of the mind that make it so intricate for a system to replicate.


The power of language

Language has always played an important role in defining our existence.

Language gives us the power to make a description.

Descriptions enable us to tell a story.

And telling and understanding stories is what our life is all about.

Now, you might point out that AI systems have evolved to describe objects and to work out cause and effect in a story. So where do they fall behind?

Well, language has the power to marshal the resources of our perceptual systems and even command them to imagine things we have never seen. Our imagination can be strong enough to make a story feel like a movie playing in the back of our minds. Making a system imagine a scenario from words alone is still a far-fetched idea.


Intelligence

I think everyone remembers IBM's Deep Blue supercomputer defeating Kasparov at chess, and now AlphaGo beating Lee Sedol at Go. But does that make these machines intelligent? I don't think so; all they did was substitute raw power for sophistication.

Intelligence in humans can be defined as having a great deal of knowledge about a subject, being able to recognize patterns, and understanding the opponent. This then allows a person to analyze a situation, formulate a strategy, and make a tactical move.

Most of the algorithms available today use sheer computational power to give the impression that a system is smart or intelligent. To make our systems even remotely human-like, we must move beyond those brute-force calculations and enable them to relate the situation at hand to past experience and act accordingly.

The nature of the mind

The nature of the mind is limitless. It can imagine wonderful things, yet it does not leave sinful thoughts behind. It can make us feel devastated over our biggest achievement and smile at our silliest mistakes. It possesses the power to make us fixate on one thing so much that we give up everything else to get it. All these inadequacies are what ultimately make us human. With artificial intelligence, all we are trying to achieve is a perfect replica of an imperfect being. See the irony?

I am not trying to say we cannot build such systems. What I am trying to say is that instead of making algorithms rely on powerful calculations, we should make them rely more on logic: let systems understand patterns and infer from experience to decide on an action, rather than traversing an entire search tree to find the optimal answer. Such a system will still make mistakes, but unlike humans it won't repeat them.
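To make the contrast concrete, here is a minimal, illustrative sketch in Python using tic-tac-toe as a toy domain: the same minimax routine run exhaustively versus cut off at a shallow depth with a crude evaluation standing in for "experience". The heuristic and all names here are my own illustration of the general idea, not a method proposed in this article.

```python
# Toy comparison: exhaustive game-tree search vs. a shallow search with a
# heuristic cut-off, on tic-tac-toe. 'X' maximizes, 'O' minimizes.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def heuristic(board, player):
    # Crude stand-in for "experience": lines still open for `player`
    # minus lines still open for the opponent.
    opp = 'O' if player == 'X' else 'X'
    open_for = lambda p: sum(1 for line in LINES
                             if all(board[i] in (p, '') for i in line))
    return open_for(player) - open_for(opp)

def search(board, player, depth, limit, stats):
    stats['nodes'] += 1
    w = winner(board)
    if w:
        return 10 if w == 'X' else -10
    if '' not in board:
        return 0                          # draw
    if depth == limit:
        return heuristic(board, 'X')      # stop early, estimate instead
    opp = 'O' if player == 'X' else 'X'
    scores = []
    for i, cell in enumerate(board):
        if cell == '':
            board[i] = player
            scores.append(search(board, opp, depth + 1, limit, stats))
            board[i] = ''
    return max(scores) if player == 'X' else min(scores)

empty = [''] * 9
full = {'nodes': 0}
search(empty[:], 'X', 0, 99, full)        # exhaustive minimax
shallow = {'nodes': 0}
search(empty[:], 'X', 0, 3, shallow)      # depth-3 search with heuristic
print(full['nodes'], shallow['nodes'])
```

The exhaustive run visits hundreds of thousands of positions, while the heuristic version looks at only a few hundred. The point of the sketch is the trade-off itself: a good evaluation (here a trivially simple one) lets you act without exploring the whole tree, which is closer in spirit to how a human plays.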