When machines talk to us



We are surrounded by chatbots, but few of us know the great ideas on which they are built.

“Can machines think?”

Anyone who studied the basics of computer science at school has certainly come across the name of Alan Turing. Considered today one of the fathers of Artificial Intelligence, he has inspired numerous books and films, especially for his contribution to deciphering the Enigma system, used by the Germans for coded communications during World War II.

Less known to non-specialists is the paper “Computing Machinery and Intelligence”, which he published in the journal Mind in 1950. There Turing reformulated the idea of an intelligent machine by proposing a method to identify a machine capable of thinking. In the test that now bears his name, Turing placed a human in front of two hidden interlocutors who would respond only through strings of text (today we would call it an online chat).

A machine hides behind one of the two responders, and the person undergoing the test has to work out, from the answers he receives, which interlocutor is human. In modern terms, he must distinguish the chatbot from the real person. If he cannot, Turing argued, we should say that we are in front of a thinking machine. Turing’s article laid the foundations of the field known today as Natural Language Processing.

A very simple chatbot (a rule-based method)

Let’s go into more detail: what is a chatbot? We can define it as an algorithm capable of conducting a conversation. In other words, it is able to interpret what we say (a question) and to provide an adequate answer. So we can distinguish two moments: the listening phase (comprehension) and the response phase.

The simplest program we can develop to perform this task is rule-based. If I ask this, you answer me that way. If I say “Hello, how are you?”, you answer “Well thank you, and you?”. For those familiar with programming, the underlying idea is the if-then construct.
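The if-then idea can be sketched in a few lines of Python. This is a minimal illustration, not a real product: the rules dictionary and the fallback message are invented for the example.

```python
# A minimal rule-based chatbot: every known question maps to a fixed
# answer, with a fallback for anything outside the rules.
RULES = {
    "hello, how are you?": "Well thank you, and you?",
    "what is your name?": "I am a very simple chatbot.",
    "bye": "Goodbye!",
}

def reply(question):
    # Normalize the input so trivial differences in case or
    # surrounding spaces do not break the lookup.
    key = question.strip().lower()
    return RULES.get(key, "Sorry, I don't understand.")

print(reply("Hello, how are you?"))  # Well thank you, and you?
print(reply("What's the weather?"))  # Sorry, I don't understand.
```

Note how fragile the lookup is: any question not written exactly as in the rules, even a single misspelled word, falls through to the fallback.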

This approach is still used in contexts where the possible requests are limited in number and correspond to unequivocal answers. It has the advantage of being precise and returning few errors, but it can hardly stand up in open contexts, where even a misspelled word can create problems.

As can be seen, it has many limits: it is time consuming, the rules must be entered manually, and it relies on large archives of records. Furthermore, it is clearly impossible to foresee every possible request. Needless to say, such a system can hardly pass the Turing test.

More complex models (a Machine Learning approach)

As early as the second half of the 1960s, more complex models were designed around the idea that it was sufficient to identify some keywords in the question and then generate one of a limited number of possible answers.

Then, after the 1980s, the introduction of Machine Learning algorithms able to learn from experience made it possible to build more robust systems based on statistics. Among these models, the most successful use Embeddings. They are based on the idea that the words of a sentence, and the sentences themselves, can be represented by numerical values, or rather by groups of values mathematically expressed as vectors.

We can imagine Embeddings as points in a space. If we can measure the distance that separates them, we can identify relationships between words, or even between sentences. To give an example, the vector of values for the word “King” will be close to that of the word “Queen”.
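The King/Queen intuition can be made concrete with cosine similarity, a standard way to measure how close two vectors point. The three-dimensional vectors below are toy values invented for illustration; real embeddings have hundreds of dimensions and are learned from large amounts of text.

```python
import math

# Toy "embeddings" invented for the example.
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.85, 0.75, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(a, b):
    # Cosine similarity: close to 1.0 means the vectors point the
    # same way; values near 0.0 mean they are unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine(vectors["king"], vectors["queen"]))  # close to 1
print(cosine(vectors["king"], vectors["apple"]))  # much smaller
```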

For these concepts to be successfully applied, however, it was necessary to wait for the new millennium and the spread of social networks. To construct efficient Embeddings, capable of capturing the many nuances of language, large quantities of examples were in fact needed, and these are provided today by online conversations. A chatbot built this way responds to an interlocutor’s question by identifying, among the possible answers, the one with the closest Embedding.
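A retrieval chatbot of this kind can be sketched as follows. Everything here is a toy assumption: the word vectors are invented two-dimensional values, the question embedding is a naive average of word vectors, and "closest" is approximated by the highest dot product.

```python
# Invented 2-d word vectors: the first axis loosely means "feelings",
# the second "time". A real system would learn these from data.
word_vec = {
    "how": [1.0, 0.0], "are": [0.9, 0.1], "you": [0.8, 0.2],
    "what": [0.0, 1.0], "time": [0.1, 0.9], "is": [0.2, 0.8], "it": [0.1, 1.0],
}

# Candidate answers, each with a hand-assigned embedding.
answers = {
    "Well thank you, and you?": [0.9, 0.1],
    "It is five o'clock.":      [0.1, 0.9],
}

def embed(sentence):
    # Average the vectors of the known words in the sentence.
    words = [w for w in sentence.lower().strip("?").split() if w in word_vec]
    return [sum(word_vec[w][i] for w in words) / len(words) for i in range(2)]

def respond(question):
    # Return the answer whose embedding best matches the question's.
    q = embed(question)
    return max(answers, key=lambda a: sum(x * y for x, y in zip(q, answers[a])))

print(respond("How are you?"))      # Well thank you, and you?
print(respond("What time is it?"))  # It is five o'clock.
```

Unlike the rule-based lookup, this sketch tolerates variation: any question whose words fall in the "feelings" region of the space lands on the same answer.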

But as linguists know well, human language is a tricky beast, and it can be very complex to decipher all its elements. Not to mention that a system based exclusively on Embeddings keeps no memory of the conversation in progress, and will respond without taking into account the answers it has already given.

Deep Learning comes into play

Starting in 2010, the rapid and impressive development of deep neural networks has provided powerful new tools that add to the previous ones. Today, Deep Learning algorithms are able to teach networks with several layers of connected units (artificial neurons) not only to interpret questions but also to build answers on the fly, word by word.

Some models divide the problem into two parts: in the first, the encoding phase, they sequentially collect the components of the incoming question and create an abstract representation of it (in a certain sense, they grasp the meaning of the sentence); in the second, the decoding phase, they take the abstract representation and generate, word by word, the output that composes the answer.
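The two phases can be sketched in miniature. The code below is an untrained toy of the encoder-decoder idea, nothing more: the tiny vocabulary and random word vectors are invented, a running average stands in for the encoder's recurrent state, and the decoder simply picks, at each step, the word whose vector best matches the current state. A real model would learn all of this from millions of example sentences.

```python
import random

random.seed(42)
vocab = ["<end>", "how", "are", "you", "fine", "thanks"]
DIM = 8
# Random stand-ins for learned word vectors.
embed = {w: [random.gauss(0, 1) for _ in range(DIM)] for w in vocab}

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def encode(words):
    # Encoding phase: fold the input words into one abstract vector
    # (a running average plays the role of the recurrent state).
    state = [0.0] * DIM
    for i, w in enumerate(words, start=1):
        state = [s + (e - s) / i for s, e in zip(state, embed[w])]
    return state

def decode(state, max_len=4):
    # Decoding phase: emit one word at a time from the abstract
    # representation, mixing each emitted word back into the state.
    out = []
    for _ in range(max_len):
        word = max(vocab, key=lambda w: dot(state, embed[w]))
        if word == "<end>":
            break
        out.append(word)
        state = [0.5 * s + 0.5 * e for s, e in zip(state, embed[word])]
    return out

print(decode(encode(["how", "are", "you"])))
```

With random vectors the output is of course meaningless; the point is only the shape of the computation: one pass that compresses the question, one loop that expands the answer.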

In these architectures, the different layers of neurons can capture the different levels at which discourse is organized: not only the morphological and syntactic level but also the semantic one. The most evolved models maintain a memory of the conversation and are able to generate new answers, ones they never saw during training.

So, do you think machines can think?

What we have talked about so far is only a glimpse of the state of the art. Advanced examples of these algorithms have been developed in recent years by large high-tech companies, and we are now surrounded by voice assistants based on them. These are complex Deep Learning architectures with numerous layers, which use Embeddings and a mix of different Natural Language Processing solutions in the pre-processing phase.

The most striking demonstration of these technologies came in May 2018. During the developer conference in Mountain View, Google’s voice assistant Duplex was asked to book an appointment at the hairdresser. With its artificial voice, indistinguishable from a human one, Duplex made the call and booked the appointment without the woman on the other end of the phone realizing that she was talking to a machine.

Returning to Turing’s article in Mind, and to his idea of how to identify a sentient machine, perhaps it is time to ask ourselves: are we finally in front of a thinking machine?

#NLP, #chatbot, #Embeddings, #AlanTuring, #Duplex

Also on https://www.linkedin.com/pulse/when-machines-talk-us-marco-fosci/

Source: Deep Learning on Medium