Ex Machina — The Fine Line Between Consciousness And Intelligence

In the Information Age, terms like Artificial Intelligence and Machine Learning have become part of everyday life. Today, in developed countries, AI is available to practically anyone: just think of the virtual assistants on our smartphones, or of home automation systems and smart cars, which no longer represent exclusively elite assets.

At the base of these technologies lies a field of artificial intelligence called Machine Learning, which uses statistical techniques to give a machine the ability to progressively improve its software by autonomously learning new knowledge and skills, just as a child in its first years of life learns to walk and talk. It is no coincidence that one family of these techniques, called Deep Learning, takes its cue from the structure and functioning of the human brain.
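To make the idea of "progressive improvement" concrete, here is a minimal sketch (mine, not the article's) of one of the oldest learning algorithms: a perceptron that learns the logical AND function from examples by repeatedly adjusting its weights, rather than being explicitly programmed with the rule.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights and a bias from (inputs, target) pairs."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred          # the error drives the update
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# The AND truth table as training data: the behavior is learned, not coded.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
```

After training, `predict` reproduces AND on all four inputs, even though no line of the program states that rule; deep learning stacks many such units into layers, which is where the analogy with the brain's networks of neurons comes from.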
The Von Neumann Model itself, which in 1945 laid the foundations for the development of modern computers by defining the architecture of a programmable machine, does not differ much from an anthropomorphic model in which a man performs operations on sheets of paper. The analogy is, in fact, clear: in the Von Neumann Architecture, where a Central Processing Unit interprets and executes instructions saved in memory, the CPU plays the role of the human brain and the memory that of the sheets of paper.
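The defining trait of this architecture is that program and data share one memory, which a CPU loop repeatedly fetches from, decodes, and executes. The toy machine below (an illustrative sketch with a hypothetical four-instruction set, not any real ISA) shows that cycle: the instructions and the numbers they operate on sit side by side in the same list.

```python
def run(memory):
    """Execute a program stored in the same memory as its data."""
    acc, pc = 0, 0                      # accumulator and program counter
    while True:
        instr = memory[pc]              # fetch the next instruction
        op = instr[0]                   # decode its opcode
        if op == "LOAD":                # execute: copy a cell into acc
            acc = memory[instr[1]]
        elif op == "ADD":               # execute: add a cell to acc
            acc += memory[instr[1]]
        elif op == "STORE":             # execute: write acc back to memory
            memory[instr[1]] = acc
        elif op == "HALT":
            return memory
        pc += 1                         # advance to the next instruction

# Cells 0-3 hold the program, cells 5-6 hold data: add them, store in cell 7.
mem = [("LOAD", 5), ("ADD", 6), ("STORE", 7), ("HALT",), None, 2, 3, None]
result = run(mem)
```

Running this leaves the sum of cells 5 and 6 in cell 7. Because instructions are just memory contents, a program could in principle rewrite itself, a property that distinguishes the stored-program design from earlier fixed-function machines.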

Although research in this field continues to produce important results, the development of these advanced systems, able to act and think rationally, remains controversial. While their practical and utilitarian benefits are rarely disputed, their ethical implications are. In this connection, there are two schools of thought regarding the operation of an intelligent machine: on the one hand, the Weak AI hypothesis holds that these technologies are the product of well-defined instructions that allow intelligent yet predetermined behavior; on the other hand, the Strong AI theory assumes that the intelligent operation of a machine entails its being conscious of its actions.

Underlying all this is an analogy: if the human body is a biological medium controlled by the psyche, it is not wrong to say that the hardware of a machine is, in turn, an electronic medium controlled by the software. Accordingly, as long as the latter is governed by precise rules, in agreement with Weak AI, the distinction between man and machine is clear; but if, on the contrary, we reached a point where Strong AI was no longer a supposition, would it be right to continue calling it a machine?

Source: Deep Learning on Medium