Why is passing the Turing Test difficult?

“Our intelligence is what makes us human, and AI is an extension of that quality.” — Yann LeCun.

“I have always been convinced that the only way to get artificial intelligence to work is to do the computation in a way similar to the human brain. That is the goal I have been pursuing. We are making progress, though we still have lots to learn about how the brain actually works.” — Geoffrey Hinton.

These two phrases are an inadvertent adumbration of the difficulty of creating Artificial General Intelligence that can be commercialized. Both allude to the complexity of the human brain as a major component in the creation of artificial intelligence, a component that can easily hamstring development for one reason: subjectivity.

The creation of Artificial Intelligence that mimics general human conversation is a complex undertaking, encompassing programming with a plethora of software, computation on sophisticated hardware and, crucially, the magic broth of knowledge about the domain the AI is supposed to address. Apart from the intricate code required for a large-scale AI, the people involved in creating the AI lend the insights that personify it. This is decisive in whether or not the AI will fulfill its intended purpose.

That is when subjectivity becomes part of the equation. If an AI is programmed with the intent of conversing with people online in pursuit of specific objectives, it must be prepared to understand and form responses to varying dialects of the same language.

Development entails teaching the AI how to respond appropriately, so naturally the AI will mirror the initial guidance it receives. And since the foundation data used to create an AI pertains to certain demographics, and the learning inputs are supplied by a particular set of people, this is the phase of development in which subjectivity drives the entire process.

Case in point: the difference between British and American English. If a British team and an American team each develop an AI agent that converses with people about score updates, team ratings and other football information, an American fan chatting with the British agent will almost certainly not get satisfactory answers, as the sketch below illustrates.
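To make the gap concrete, here is a minimal, hypothetical Python sketch, not taken from any real system: a keyword-based agent whose vocabulary mirrors the dialect of its British creators. Every intent name and keyword is an illustrative assumption.

```python
# Hypothetical sketch: a keyword-based intent matcher whose vocabulary
# mirrors its British creators' dialect. All intents and keywords are
# illustrative assumptions.

BRITISH_VOCAB = {
    "score_update": ["scoreline", "full-time", "nil-nil"],
    "fixture_info": ["fixture", "fixtures", "kick-off"],
    "league_position": ["table", "league table"],
}

def match_intent(query: str) -> str:
    """Return the first intent whose keywords appear in the query."""
    query = query.lower()
    for intent, keywords in BRITISH_VOCAB.items():
        if any(keyword in query for keyword in keywords):
            return intent
    return "unknown"  # the agent cannot comprehend the user

# The British fan is understood; the American fan asking for the very
# same information is not.
print(match_intent("What was the full-time scoreline?"))      # score_update
print(match_intent("What's the score of the soccer game?"))   # unknown
print(match_intent("When's the next game on the schedule?"))  # unknown
```

A production agent would use far richer language models than keyword matching, but the underlying failure is the same: the agent only understands the dialect it was taught.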

And if the AIs have been created to handle more generic conversations, the difference in the responses they elicit will only widen as the dialects diversify.

If the gap between the AI’s knowledge base and the user’s language is large enough, the AI will simply be unable to comprehend what the user means. And at that point, is it Artificial Intelligence, or a robot that can only respond to a very specific set of inputs?

One of the factors that differentiates AI from regular computer applications is the ability to learn from interactions. Once again, creators have a major role to play, because they are a decisive factor in what exactly the AI learns from those interactions.
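As a toy illustration of that point, consider this hypothetical sketch of a bot that learns replies from its users, but only those replies its creator-written policy accepts. Nothing here reflects any real system; it simply shows how the creators’ choices determine what gets learned.

```python
# Hypothetical sketch of learning from interactions. The bot memorizes
# user-supplied replies, but only those that pass a policy written by
# its creators, so what it "learns" reflects the creators' choices.

class LearningBot:
    def __init__(self, accept_policy):
        self.responses = {"hello": "Hi there!"}
        self.accept_policy = accept_policy  # creator-defined filter

    def reply(self, prompt: str) -> str:
        return self.responses.get(prompt.lower(), "I don't understand yet.")

    def learn(self, prompt: str, suggested_reply: str) -> bool:
        """Adopt a user's suggested reply only if the policy allows it."""
        if self.accept_policy(suggested_reply):
            self.responses[prompt.lower()] = suggested_reply
            return True
        return False

# Two creators, two policies, two very different bots after identical input.
permissive = LearningBot(accept_policy=lambda text: True)
cautious = LearningBot(accept_policy=lambda text: "destroy" not in text.lower())

suggestion = "OK, I will destroy all humans"
permissive.learn("what will you do?", suggestion)
cautious.learn("what will you do?", suggestion)

print(permissive.reply("what will you do?"))  # parrots the toxic suggestion
print(cautious.reply("what will you do?"))    # "I don't understand yet."
```

The two bots receive identical interactions; only the creators’ acceptance policies differ, and so do the resulting personalities.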

In March 2016, the humanoid robot Sophia was unveiled for the first time. The now-infamous demonstration eclipsed everything else on display after Sophia responded with “OK, I will destroy all humans” to a question along those lines.

The aftermath of the demonstration saw the authenticity of the entire event questioned, but regardless of how genuine Sophia’s behaviour was, the episode illustrates the importance of the AI’s developers: Sophia’s response was in keeping with the personality she had been modelled around.

Tangentially, bias plays just as big a role in how an AI will perceive and process data. A human conversing with an AI will often feed it partial or biased information that may not map onto the learning models built into the AI. This will inevitably result in the generation of skewed, or even outright inaccurate, output.

The only way to circumvent such an issue is to ingrain the ability to distinguish between data that can be used and data that cannot. There are numerous methods of filtering data, but whichever method is chosen will be time-consuming and complex.
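For illustration, here is a minimal, hypothetical sketch of one such approach: a handful of heuristic checks that decide whether a user utterance is safe to feed back into learning. The thresholds and blocklist are invented for the example; real pipelines stack many more checks, which is precisely what makes filtering so time-consuming.

```python
# Hypothetical sketch: heuristic checks that decide whether a user
# utterance can be fed back into learning. Thresholds and blocklist
# are invented for illustration.

import re

BLOCKLIST = {"destroy", "kill", "hate"}

def is_usable(utterance: str, min_words: int = 3) -> bool:
    words = re.findall(r"[a-z']+", utterance.lower())
    if len(words) < min_words:               # too short to learn from
        return False
    if set(words) & BLOCKLIST:               # abusive or leading content
        return False
    if len(set(words)) / len(words) < 0.5:   # spammy repetition
        return False
    return True

interactions = [
    "The match kicks off at three on Saturday",
    "destroy all humans",
    "lol",
    "win win win win win win",
]
print([u for u in interactions if is_usable(u)])
# Only the first utterance survives the filter.
```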

All these issues band together to form the biggest hurdle in the development of AI: cost. Current technology is simply not capable of producing an AI that can be commercialized at low production cost.

Arguably the most nuanced form of Artificial Intelligence accessible to millions of people today is Apple’s Siri. Apple reportedly paid $200 million in 2010 to acquire Siri, and has undoubtedly spent comparable sums on Siri’s development over the past decade.

As we continue our quest to create Artificial General Intelligence in our image, we will face multiple challenges, and the utopia (or dystopia, depending on how you look at it) remains a long-term prospect. For now, many obstacles stand between us and the ability to truly harness Artificial Intelligence.