Cybernetic Existentialism: can a machine imagine its end?

Source: Deep Learning on Medium


This article is the result of a long dialogue with a psychologist friend of mine, who — always with great acumen and critical spirit — highlighted some fundamental aspects of modern artificial intelligence (AI), comparing them with the hinges on which the substratum of life itself rests and which AI inevitably takes as its starting point. In particular, he pointed out that while mankind is driven by a very strong instinct of self-preservation — not to be ascribed to the long list of more or less advanced drives, but rather to the phylogenetic basis of existence itself — an intelligent machine, however well designed, would have no valid reason to set itself the same goal.

My immediate reaction was to think of the inevitable failures that electronic or mechanical components "suffer" over time; at first glance, I therefore replied that the conservation of the species (understood as a group with similar characteristics) would still be necessary to avoid a progressive destruction of its member elements. However, reflecting on the problem more deeply, it seems obvious to me that the problem of failure and its resolution is far from sufficient to allow us to speak of a "conservation instinct". The reason is very simple, and the explanation can only be drawn from human reality: if I fracture my arm, a long series of endogenous stimuli, among which pain certainly stands out, tells me that a noxious condition has occurred in my body and that I must immediately find a remedy to avoid an escalation of the danger. Any reasonable person would be "forced" by the status quo to go to the hospital and receive the necessary care.


At this point, it seems clear to me that there is no reason why a machine cannot do the same. Indeed, nowadays it is rare to find electronic or mechanical systems that do not include a fault self-diagnosis scheme. Hence, it is not at all unrealistic to think of machines that can adopt behaviors based on adaptive controls which, in turn, make the best choices from a certain number of internal and environmental variables. In short, automatic diagnosis and repair of failures is routine in almost all fields of engineering, but nobody would dare to say that a computer, when it signals an excessive processor temperature, is somehow displaying an uncontrollable desire to have offspring; if anything, it is (more or less intentionally) taking care to safeguard the integrity of its vital structures.

For a human being the situation is certainly different: he does not conceive reproduction as a means of self-repair (an absurd statement), but as a necessary condition of existence that only in retrospect we can define in macroscopic terms. In fact, the concept of copulating with reproductive purposes is not inherent in the social policy of a community but is inevitably disseminated in every member, almost as if it were an innate cultural background. Of course, in saying so, I don't want the reader to think that I support the thesis of nativism too lightly. I am convinced that the knowledge needed to create a human being arises first of all from a more or less profound knowledge of the mechanics of reproduction; ultimately, it is essential that each element of a group is first able to separate the compatible members from the others. Unless we consider the paradoxical situation of general hermaphroditism, it seems obvious to me that the individual can only become aware of this if placed within an adequate context.

From my point of view — that of a designer of intelligent machines — the need for the continuation of the species is certainly not a factor of primary importance, but it is nevertheless interesting, from the perspective of artificial consciousness, to analyze which requirements a machine should have in order to openly manifest a desire for progeny.

First of all, as I have already said, I consider this tendency, albeit individual, to be an emerging property of a socially formed group. In other words, in my opinion, it is almost impossible to assess an individual's degree of interest in reproduction unless his existence is properly contextualized. Although trivial, this thesis highlights the need to observe reality as a complex structure that includes the observer himself as an integral part, and therefore shifts the point of view from pure psychology to the more general sociology. If we take for granted that the species wishes to continue its existence, we need to drop the myth of a superman capable of perfectly representing the macrocosm in which he lives. Obviously, this does not mean that the individual acquires the ability to reproduce, but rather that this peculiarity is "awakened" by the continuous interaction between members of a community.

For these very reasons, I have treated the problem as if the active agent, whether human being or machine, is such if and only if there are other homologs that are compatible with it and aware of their mutual existence. However, it should be pointed out that once this process has been carried out, the singular identity loses some of its constitutive value in order to guarantee the community the compactness necessary to avoid a progressive fragmentation. For the same reason, it is much more convenient to describe the instinct of conservation as an emergent property of a system that drives its members to try to understand which local and global factors can really influence it.

A machine, taken individually, has an extremely limited existence: it can operate according to what is prescribed by its design algorithms, or it can evolve in a rather random way, giving life to a temporal dynamics that is initially unknown and definable only in probabilistic terms. In any case, it could never cross the threshold that separates individuality from the awareness of belonging to any context. An intelligent isolated system can, therefore, potentially be able to have consciousness, but, lacking the wide range of external stimuli characteristic of higher animals, it "will live" its life with the inherent (and dramatically wrong) conviction of its own uniqueness.


In other words, it will be an atom in a universe devoid of any force acting between particles; from a purely existential standpoint, such an agent will have the full right to consider itself as the whole universe (or vice-versa). This choice, in turn, will unconsciously limit any possibility of experiencing different and more extended realities. It is therefore absolutely impossible for the isolated machine to exhibit interest in reproduction for conservative purposes, but what happens (at least theoretically) in a context where there are several intelligent agents?

To answer this question, let us run a little virtual experiment: suppose we create a three-dimensional arena where some robots are placed, free to move and interact with each other. For example, one of them could ask the others where a certain object is located and receive a reply from the one that first locates the target. The type of interaction does not matter; what really matters is that each individual robot is perceptively active and capable of communicating according to a predefined protocol. We also assume that each system incorporates a control device that continuously monitors the robot's "vital" functions and can promptly trigger an alert when a component is close to breakdown. In this way, we are starting from the assumption that the single agent is designed so as to be aware of both its limitations and the damage that its structures can "suffer"; we have therefore unknowingly imposed the condition that every member of the small community has an existential consciousness that leads it to behave taking into account all its inherent limitations.
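The self-monitoring device described above can be sketched in a few lines. This is a minimal toy model, not a real robotics design: the `Robot` class, its component names, and the alert threshold are all hypothetical choices made for illustration.

```python
import random

class Robot:
    """One agent of the arena: a set of monitored components plus a
    self-diagnosis routine that flags any component close to breakdown.
    All names and thresholds here are hypothetical."""

    ALERT_THRESHOLD = 0.2  # remaining "health" below which an alert fires

    def __init__(self, name):
        self.name = name
        # component -> remaining health in [0, 1] (1.0 = brand new)
        self.components = {"locomotion": 1.0, "sensors": 1.0, "logic": 1.0}

    def wear(self, amount=0.05):
        """Simulate random wear on one component during normal operation."""
        part = random.choice(list(self.components))
        self.components[part] = max(0.0, self.components[part] - amount)

    def self_diagnose(self):
        """Return the list of components close to breakdown."""
        return [p for p, h in self.components.items()
                if h < self.ALERT_THRESHOLD]

robot = Robot("robot-1")
for _ in range(30):
    robot.wear()
alerts = robot.self_diagnose()
print(alerts)  # may be empty, or list the parts that are nearly worn out
```

Nothing in this sketch goes beyond the fault self-diagnosis already common in engineering: the alert is a threshold check, not an "instinct".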

From a design point of view, it is also possible (and desirable) that a "sick" robot be able to take all necessary emergency measures. This kind of attitude confirms once again the intentionality of the agent's behavior: it wants to continue its life and, to a certain extent, it is afraid of its termination. Even if this may seem paradoxical, we must keep in mind that there is no metaphysical justification for the desire to live: every person tries to preserve himself and is afraid of death only for purely cultural reasons. It is therefore not absurd to imagine programming a robot so as to create the desire for life, exactly as it is normal to teach a child to avoid certain risks because they can cause severe injuries. A very important point, instead, is the awareness inherent in the transition from a general state of life to its logical opposite (i.e. death). The instinct of conservation takes shape from this factor and evolves on the basis of considerations belonging to the social sphere, such as the value assigned to the tasks carried out by each member, the affiliation level derived from synergistic relationships, or simply a personal desire to preserve one's existence according to both personal and social values.

The pillar of this whole discussion is hence the fundamental concept of unity: the impossibility of replacing oneself through cloning (this is indeed a very complex problem and I don't want to discuss it in deeper detail now). This is the living energy that feeds the deepest instinct: safeguarding one's life. However, as is clear to anyone, this longing is constantly opposed by the conscious perception of the structural and functional limits of the substratum that sustains every conscious activity. The immediate consequence is the birth of a struggle between desire and the awareness of inability. Thanks to rationality, every person realizes that a transition must take place sooner or later and that this moment will be unique, unrepeatable and, above all, irreversible. When this happens, the dominance of reason (which becomes more social and less selfish) reveals its most formidable weapon against any form of limitation: reproduction.

Therefore, we can observe three distinct phases:

  1. The preservation of self
  2. The awareness of all physical and biological limits that prevent eternal life (I’m voluntarily not taking into account any potential future attempt to solve this problem)
  3. The solution of (2) through the procreation of new members

It is crucially important not to underestimate the need for all three parts of the process, because otherwise not even the recourse to emergentism could explain it. Another consideration is that the transition from the second to the third phase is feasible only within a community, even though it is undeniable that every conscious being must necessarily come to terms with the whole triad. Apparently this may seem like a contradiction, but if we analyze the demographic trends of a city and, at the same time, catalog personal attitudes toward reproduction, we immediately find that, while the average population remains almost constant — in the face of normal fluctuations — many people do not have the slightest desire to procreate or, at least, do not plan this event as a primary and fundamental goal!

Let us now move to the field of machines and resume our virtual experiment: as stated, the only way to verify the presence of a certain instinct of conservation is to evaluate each robot's degree of awareness of the aforementioned triad. The first point is certainly guaranteed by automatic fault-diagnosis systems, and therefore we can be sure that the "robotic self" is constantly and sufficiently safeguarded. The second point is perhaps more critical, but even in this case the problem can be circumvented by including in the design a device for evaluating the quality of the components based on a statistical measure like the MTBF (Mean Time Between Failures). This parameter is characteristic of every human artifact and allows us to estimate the average life of each component (and, of course, of the whole object).

The problem might seem simpler for human beings, as there are several national and international organizations that periodically calculate the value of the human MTBF (life expectancy). However, it is absolutely not true that at the age of 75 a man is about to die; it is certainly true that, on average, in a population, the number of deaths of people whose age is in the range (70, 80) is larger than the one observed in the range (20, 30). By this, I mean that the second point of the triad is influenced both by endogenous factors (mainly the appearance of senile degenerative pathologies) and by the cultural diffusion of emerging pieces of information which are difficult to obtain through local analyses. Once again emergentism seems to dominate, and this could invalidate our statements about machines; however, a substantial difference between human beings and artificial systems lies precisely in the ability to self-assess the state of one's components. A well-designed robot — possibly built in a multi-modular way — could, in principle, track the total number of active units and compare it with the number of unusable components. On the basis of this observation, the machine can perform a sufficient number of estimations and reach an individual MTBF.
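The individual MTBF estimation mentioned above is, in its simplest form, just the mean of the operating intervals observed between successive component failures. The function and the sample intervals below are hypothetical, chosen only to make the arithmetic concrete.

```python
def estimate_mtbf(failure_intervals):
    """Estimate an individual MTBF as the mean of the observed operating
    intervals (e.g. hours) between successive component failures.
    A deliberately naive sketch: real reliability models fit a failure
    distribution rather than taking a plain average."""
    if not failure_intervals:
        raise ValueError("need at least one observed interval")
    return sum(failure_intervals) / len(failure_intervals)

# A robot logging the hours elapsed between its last few component failures:
intervals = [1200.0, 950.0, 1430.0, 1020.0]
mtbf = estimate_mtbf(intervals)
print(mtbf)  # 1150.0
```

The point is not statistical sophistication but feasibility: a machine that can count its active versus failed units already has everything it needs to compute such a number about itself, which no human can do introspectively.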


Having clarified this point, we arrive at the most crucial question: the culmination of the triad, reproduction for conservative purposes. We have said that the human instinct for procreation springs from factors generally linked to the individual and his reality. In a certain sense, we could add that the (innate) desire for indirect continuation is the final compromise of the triad and, hence, can be assumed to be the true existential force that animates the entire life of an organism. Does such a force exist in our arena full of living and interacting robots?

In order to try to answer, we must assume the position of the programmer who mentally simulates the behavior of artificial organisms. Let us suppose that robot 1 is engaged in a certain task and suddenly realizes that the servomechanisms controlling its locomotion are damaged. It is therefore forced to stop any activity and seek help. In the worst case, the mechanical damage could be caused by a short circuit in the electronic systems which, in turn, might be irreparably damaged (e.g. completely burned). Suppose, however, that a small part of the modules is still active and that such an internal condition is defined as "agony". Can the machine really foreshadow this state? The transition from operation to final failure is necessarily binary, i.e. there will always be an instant before which the robot still works (even partially) and after which it becomes unable to perform any task. The succession of internal states must therefore necessarily finish, and the transition from the last active state to the total absence of states (death) is formally indistinguishable from any other standard transition. In other words, the robot will never be aware of the "transition" and will always evaluate its condition, albeit desperate, as a general fault that must be fixed before it can resume its duties.

However, let us suppose we "force" the knowledge of the robot by informing it that its problems have no solution and that, at most, it will be able to give life to new organisms through some reproduction mechanism (the most trivial one starting from dismantling). Once again we need to face the problem of observing the configuration of internal states after this tragic communication: is there any particular sign that can inform us about the possible awareness acquired by the robot? The answer is very likely to be negative, and the most obvious reason is that the system cannot imagine, either in an analytical or in a figurative way, a state whose main feature is non-existence!
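The argument about the final transition can be made concrete with a toy state log. The state names below are hypothetical; `None` models the total absence of states. Every recorded transition has exactly the same form (old state, new state), including the last one, so no internal state ever encodes "I am about to not exist" — and the entry into `None` can only be observed from outside, never by the agent itself.

```python
# Internal life of the robot as a sequence of states; None = no state at all.
states = ["operational", "degraded", "agony"]
log = []          # transition log, only ever written while a state exists
current = states[0]
for nxt in states[1:] + [None]:
    log.append((current, nxt))  # formally identical for every transition
    current = nxt

print(log)
# [('operational', 'degraded'), ('degraded', 'agony'), ('agony', None)]
```

From inside the system, ('agony', None) is just another tuple in the log, structurally indistinguishable from ('operational', 'degraded'); the special character of that transition exists only for the external observer who knows that no further entry will follow.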

Therefore, we can deduce that the robot cannot inherently think about death, and hence the triad cannot be completed. It does not matter what value the robot attributes to itself and to its work, because, in any case, the crucial element is the relationship between being in a given space-time point and not being able to be in any other point. When this situation occurs, we have the awareness of a continuum that must somehow break; but if this event is banned from the functional dynamics itself, then any prefiguration of a total absence of life becomes impossible. So, if we look for the roots of the instinct of conservation in the completion of the triad, it is more than evident that a machine will never be able (at least, until it becomes an alternative human being) to acquire an autonomous internal state that pushes it towards some reproduction process (assuming one exists and is feasible) unless this is planned (in the most algorithmic and literal sense of the term). In this case, which apparently could reveal the emergent property of procreation, the machines would adopt a behavior very similar to that of a human community, but this would be nothing more than a pure illusion, since there would be no reason to refer to instincts or impulses, as every form of tacit awareness would inevitably fail.


In conclusion, I would like to remind the reader that my analysis is based essentially on the comparison between groups of human beings and groups of intelligent robots; however, I have not defined at any point in the text what I mean by intelligence applied to a machine. Even if this omission might be the cause of controversy and criticism, I would like to point out that the very concept of intelligence is definable only starting from the study of particular animals, above all human beings. This necessary step implicitly defines a context and, automatically, biases our interpretation of intelligent behaviors. Therefore, if we start from this assumption, the value to be attributed to the term intelligent robot is somewhat arbitrary, since it is limited by the consideration that the behavior under consideration (the instinct of conservation) is not a prerogative of human beings alone, but is present in the majority of animals. Our virtual robots can be thought of as artificial structures capable of managing internal states, equipped with a bivalent perceptive system (able to capture information flows both from the external world — exteroceptive sensors — and from the internal circuits and logic modules — proprioceptive sensors) and, finally, with a locomotion-interaction system that allows the robot to come into full contact with the predefined environment/context.

