“To Artificially Learn Is To Artificially Live” — On Examining The Human Intelligence For…

Prologue

The human intelligence has numerous interesting aspects, addressing both the immediate human needs and the overall existential requirements.

In this article, such aspects of the human intelligence are examined to draw parallels with the emerging Artificial Intelligence (or AI), in order to shape its growth and trajectory.

Section 1: To Learn Then, Eh?!

From the perspective of the human intelligence, immersed in the ambient environment and gathering data through the human senses, to live a life involves solving a variety of learning problems.

Then, conceptually, any learning problem comprises the following five parts:

  1. The Input-Output (or I).
  2. The Spatio-Temporal Frame Of Reference (or F).
  3. The Computation Model (or C).
  4. The Expendable Energy (or E).
  5. The Objective (or O).

These parts, in comparison with the emerging AI, are described below.

The Input-Output (or I):

The input, as measured through the available sensor-modalities (e.g., GPS, LIDAR, etc.) over a given domain (e.g., audio, video, etc.) in a given form (e.g., encoded, quantized, etc.), defines the observed real-world phenomenon.

Then, the state-of-the-art sensor-modalities have surpassed the human senses in terms of the data-fidelity gathered on the aforementioned phenomenon (e.g. the data-fidelity of the visible-spectrum data collected by the modern camera sensors has surpassed that of the human eyes in terms of color, lighting, stereoscopic depth perception, and more).

Separately, the output of a learning problem could be a countably infinite set of predictions, subject to the expendable energy. However, this need not always be the case; some outputs would instead be different learned representations kept for subsequent reference.

Section 2: What’s Sandboxing The Learning?

The Spatio-Temporal Frame Of Reference (or F):

The frame of reference defines the underlying physics. For example, in the proximate-spacetime setup, space approximately obeys the Euclidean geometry principles.

However, as the spacetime span increases, space acquires spatial non-linearities such as curvature, going beyond the Euclidean geometry principles into the non-Euclidean realm.

The direction of time, in both the proximate and remote spacetime setups, is considered to be linear, as it tracks the change in the universally available entropy, from lower to higher, as governed by the second law of thermodynamics.

While the proximate-space approximately obeys the Euclidean geometry principles, it does so, when the spatial data is integrated over a span, by incurring cumulative errors in the process.

Since most applications operating under the Euclidean geometry principles are purposely structured to define boundaries, the aforementioned cumulative errors tend to present a distorted view of the observed world. For the most part, these errors seem to be small enough by the target applications’ tolerance standards and are ignored.
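
A small sketch of this effect, under illustrative assumptions: positions on the Earth’s curved surface are measured with a naive flat-plane distance, and the error grows with the spatial span.

```python
import math

# Toy illustration (not from the article): distances on the Earth's curved
# surface computed two ways. The flat, Euclidean approximation incurs an
# error that accumulates as the spatial span grows.
R = 6371.0  # mean Earth radius, km

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine distance on the sphere: the non-Euclidean reference."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def flat_km(lat1, lon1, lat2, lon2):
    """Naive Euclidean distance after flattening the coordinates."""
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return R * math.hypot(x, y)

for span in (0.1, 1.0, 10.0, 60.0):  # degrees of separation
    err = abs(great_circle_km(0, 0, span, span) - flat_km(0, 0, span, span))
    print(f"span {span:5.1f} deg -> Euclidean error {err:9.3f} km")
```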

Section 3: Isn’t Learning Just Cognitive Computing?

The Computation Model (or C):

Modern computing is based on the paradigm of the Turing machine coupled with random access memory. Essentially, this form of computing defines the act of computing to be discrete in nature, operating over piecewise quantized units of input data and producing the output in the same manner.
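
For concreteness, a minimal sketch of this discrete paradigm: a toy Turing machine (the rule set here is illustrative) that increments a binary number over quantized tape cells.

```python
# A toy Turing machine: discrete states, discrete tape cells, discrete steps.
def run_tm(tape, rules, state="scan", blank="_"):
    tape, head = list(tape), len(tape) - 1  # start at the rightmost cell
    while state != "halt":
        if head < 0:
            tape.insert(0, blank)
            head = 0
        if head >= len(tape):
            tape.append(blank)
        state, write, move = rules[(state, tape[head])]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape).strip(blank)

# Increment a binary number: turn trailing 1s into 0s, then one 0/_ into 1.
rules = {
    ("scan", "1"): ("scan", "0", "L"),
    ("scan", "0"): ("halt", "1", "R"),
    ("scan", "_"): ("halt", "1", "R"),
}
print(run_tm("1011", rules))  # prints 1100, i.e. 11 + 1 = 12
```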

Then, the state-space of such a discrete machine could be large (e.g. countably infinite) based on the nature of computation performed. Therein, the constraints defined by the applications’ requirements, operating over the aforementioned input and the spatio-temporal frame of reference, ensure that its output is bounded.

However, the Turing machine based computation model focuses on the discrete computable functions only. Then, to extend the computation models, other forms may be considered that take into account continuous computable functions along with the pertinently-mapped learning models, among other approaches.

Notably, due to the computability-learnability equivalence principle, the state-of-the-art learning models have the same expressive power as that of the Turing machines.

Section 4: How Much Can Be Learned?

The Expendable Energy (or E):

Outside of the aforementioned parts that define any learning problem is a much more fundamentally constraining requirement. This requirement is based on the energy usage: in particular, the amount of energy expended, in Joules, to acquire the input, establish and operate within a spatio-temporal frame of reference, define and operate over a computation model, and produce an output.

However, this is not to suggest that alternate forms of the aforementioned parts can’t be defined; rather, it may be the case that alternate energy-efficient but varying-fidelity methods for them have not been discovered yet. Thus, the true nature of the learning problem manifests.

Then, by the computability-learnability equivalence principle, there are hard bounds on how much can be learned in this universe.

Section 5: What Is The Direction Of Learning?

The Objective (or O):

Any learning problem belonging to a learning framework (e.g. deep learning) has a well-defined objective (e.g. classify the set of input images into cat and dog images) and goal metrics to assess the fitness of a solution to the aforementioned objective (e.g. the F1-score, the harmonic mean of precision and recall, summarizes a learning model’s statistical test performance). Then, such an objective defines the direction along which the learning problem and its solution proceed.
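
As a concrete instance of such a goal metric, here is a minimal F1 computation for the cat/dog objective; a sketch, with the data and the choice of “cat” as the positive class being illustrative.

```python
# F1 = harmonic mean of precision and recall, with "cat" as the positive class.
def f1_score(y_true, y_pred, positive="cat"):
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

y_true = ["cat", "cat", "dog", "dog", "cat"]
y_pred = ["cat", "dog", "dog", "cat", "cat"]
print(f1_score(y_true, y_pred))  # precision 2/3, recall 2/3 -> F1 = 2/3
```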

While constraining the computable functions to be discrete, the learning problems implicitly also constrain the input and output to be discrete. Then, it could be observed that by defining the computable functions to be continuous, a variety of new applications may be constructed.

Section 6: The Learning Model

As a result, based on the aforementioned parts, any learning problem L is mathematically defined by the five-tuple (I, F, C, E, O):

I : The Input-Output.

F : The Spatio-Temporal Frame Of Reference.

C : The Computation Model.

E : The Expendable Energy.

O : The Objective.

Then, to learn is to perform operations over the following map:

L : I[i] × F × C × E → I[o], subject to the constraints defined in the objective O, where I[i] and I[o] are the input- and output-constrained parts of the Input-Output I, respectively.
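
One possible, purely illustrative rendering of this five-tuple in code follows; every name, type, and the toy computation are assumptions made for the sketch, not a prescribed implementation.

```python
from dataclasses import dataclass
from typing import Callable

# A sketch of the five-tuple L = (I, F, C, E, O); all fields are illustrative.
@dataclass
class LearningProblem:
    I: tuple        # (input space I[i], output space I[o])
    F: str          # spatio-temporal frame of reference
    C: Callable     # computation model
    E: float        # expendable energy budget, in Joules
    O: Callable     # objective: constrains the admissible outputs

    def learn(self, x):
        """Apply the map L : I[i] x F x C x E -> I[o], subject to O."""
        y = self.C(x)
        if not self.O(y):
            raise ValueError("output violates the objective's constraints")
        return y

problem = LearningProblem(
    I=("integers", "integers"),
    F="proximate spacetime, approximately Euclidean",
    C=lambda x: x + 1,   # a trivial computable function
    E=1e-9,
    O=lambda y: isinstance(y, int),
)
print(problem.learn(41))  # 42
```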

For reference, the human intelligence doesn’t necessarily rely on the high-fidelity input. It is not clear whether the human intelligence necessarily operates within the Euclidean-space and directional-time frame of reference. It is not evident that the Turing machine is the defining computation model for the human intelligence. Finally, it is not clear if the human intelligence computes discretely at all. Then, the aforementioned mathematical definition of the learning problem can be used to define both the human intelligence and AI.

Separately, compared to the human senses, advancing the sensor-modalities to acquire increasingly high-fidelity data is a resource-expensive pursuit with rapidly-diminishing returns (e.g. should the self-driving car research necessarily focus on further improving the onboard sensors’ capabilities instead of the driving learnability?).

Section 7: The Cumulative Errors

As a matter of separate investigation, the target applications’ adjustments for the spatio-temporal frame of reference based cumulative errors should be considered to enhance the applications’ usefulness (e.g. the GPS satellites providing the positioning and other pertinent information based on the Euclidean nature of the space could consider the space to be non-Euclidean).

Alternatively, to minimize such cumulative errors due to the spatio-temporal frame of reference, it may be useful to focus on understanding the way the human intelligence functions: to define a generalized universal learning model that not only learns discrete and continuous computable functions but also has the ability to introspect, select, and pivot, among other operations, based on a family of likely competing objectives, operating over the limited-fidelity input, the cumulative-error-inclusive quantum frame of reference, and the constrained energy budget.

Section 8: The Learnability Gap

On closely observing the worldwide development of the state-of-the-art AI, it comes through as an exercise in building human-need-focused ensembles of computable learning models.

However, at the surface level, it is not evident that such an exercise leads to a form of intelligence that compares with how the human intelligence works.

For example, in the general sense, it is not discernible in the state-of-the-art AI research how, based on an abstract or concrete objective and subject to the expendable energy, an AI may, without any human intervention, introspect to assess that there is a learning gap, design an experiment to collect the pertinent data, extract learnable artifacts from the findings, and add to the growing ensembles of learning models, to bridge that gap. In comparison, such endeavors are routinely performed by the human intelligence.

Then, it remains to be determined whether the human intelligence is just an ensemble of computable learning models.

Section 9: There Is No Spoon

A unique feature of the human intelligence is to transfer-learn over the different senses. To further elucidate this notion, consider the following thought experiment.

Imagine you are looking at a spoon. Then, you close your eyes, and from a set of objects, are asked to recognize the spoon through touch. Assuming there is only one spoon in the set, you would correctly identify it among the objects.

Then, in a supervised learning manner, due to the associative training of mapping the spoon’s image with its physical form characteristics, you are now able to recognize the spoon using only its form-based touch input instead of its image-based visual input.

In comparison, this sort of transfer-learning over the different sensor-modalities (e.g. a point cloud generated by the LIDAR sensor is recognized by the neural networks through the capacitive multi-touch sensor) is currently not actively pursued by the state-of-the-art AI research, with a few exceptions.

Exception: Viktor Tóth’s work on the visual-to-auditory transfer-learning for blind-person navigation at the Feinstein Institute.
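
A minimal sketch of the spoon experiment’s associative training, under toy assumptions: synthetic paired vision/touch features, with scikit-learn’s LogisticRegression standing in for the learner. The label learned visually supervises the touch modality.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical paired data: each object observed both visually and by touch.
rng = np.random.default_rng(0)
n, d_vis, d_touch = 200, 16, 8
labels = rng.integers(0, 2, n)          # 0 = spoon, 1 = fork (toy classes)
vision = labels[:, None] + 0.3 * rng.standard_normal((n, d_vis))
touch = labels[:, None] + 0.3 * rng.standard_normal((n, d_touch))

# Step 1: supervised training on the visual modality only.
vis_clf = LogisticRegression().fit(vision, labels)

# Step 2: transfer across modalities; the vision model's predictions
# act as the "associative" supervision for the touch model.
pseudo_labels = vis_clf.predict(vision)
touch_clf = LogisticRegression().fit(touch, pseudo_labels)

# Step 3: eyes closed; recognize the spoon from touch alone.
print("touch-only accuracy:", touch_clf.score(touch, labels))
```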

Take Home Thought Exercises:

Discover The Concept: Synesthesia.

Section 10: You Broke My Wine Glass

To appreciate the diversity and range of the human intelligence, consider the following allegory.

John, an enterprising physicist, is on the verge of a breakthrough that allows sidestepping the limit on the speed of light that the manifesting universe imposes.

Naturally, among the numerous philosophical implications that change the human understanding of the universe itself, the more pragmatic applications involve interstellar space travel using this new physics. Then, Pierre, the lab director, asks John to host an open house to invite the wealthy angel investors.

John, being a purist, not so secretly detests the commercial applications of his research. He would rather let it blossom naturally, without the aberrant, tainted requirements that external funding sources would bring. On several occasions, he has had differences of opinion on this topic with Pierre.

Then, one morning, when John was vacillating particularly towards rejecting the external funding sources in the upcoming open house, he shared his reservations with his graduate student Gordon, who had made several pivotal contributions to a key piece of the new physics that addresses the infinite energy requirement for matter transport at faster-than-light speeds.

Inexplicably, Pierre got wind of John’s latest round of ambivalence on the topic at hand. Frustrated with John’s worldview, Pierre stormed into John’s lab to confront him and give him a piece of his mind.

At more than 90 decibels, Pierre blurted “John, I’m sick and tired of your inanities.”

Gordon, working in the lab at this point, lowered his head to make himself invisible to the ensuing argument.

“Pierre, why are you so angry?” quipped John.

“I’m angry at your naive worldview,” retorted Pierre.

“You think resources for this lab just magically manifest as easily as the physics comes to you,” said Pierre, visibly shaking with anger at that point.

“No,” John replied nonchalantly.

“But ceding control to external funding sources equates to letting those forces dictate what happens to this science,” John warned.

“Let me worry about that!” Pierre shouted and took a solitary wine glass that was in his proximity and smashed it to the ground.

John, amused by this turn of events, called out to Gordon: “Gordon, could you please ask the janitor to stand by for cleanup and bring me six new wine glasses from my office?”

With his PhD defense scheduled in a couple of weeks, Gordon was ill at ease rejecting the extra-scientific duties he had lately been asked to perform. He returned with a tray of six empty wine glasses, adding to Pierre’s perplexity.

“What’s this for?” demanded Pierre.

Then, in swift succession, John smashed one wine glass after another to the ground, and a brief silence ensued.

“Why did you do that?” Pierre asked authoritatively.

John, anticipating the question, calmly replied “I thought we were making points by breaking wine glasses instead.”

“What?!” Pierre responded with a confounding look.

Then, John said “Instead of transparently addressing my concerns about the external funding sources with facts, you swept them under the rug, proverbially speaking, and on top of that you broke my wine glass. I thought breaking the glass was the point, so I broke six, and my concerns still stand.”

Pierre saw the error in his ways and apologized to his most accomplished subordinate.

Then, the janitor came in to clean up the mess, and Gordon went back to finding the answers to the following elusive questions:

  1. If matter is omnipresent, why does he necessarily need to transport it?
  2. Could he instead transmit the requisite properties of the select matter?

Post reading the allegory, if a question were posed about John’s true identity, would a human adjudicator be able to determine whether John is indeed an AI? Therein, it is evident that the state-of-the-art AI cannot bring to bear the diversity and range of cognitive capabilities John has demonstrated in the aforementioned allegory. Then, the human adjudicator would classify John as a human.

Furthermore, it is to be determined whether the AI could reason on par with Gordon’s reasoning abilities on a topical subject. Separately, given the limited reasoning demonstrated by Pierre in the aforementioned allegory, if the AI were to successfully simulate the human intelligence, Pierre’s would be one of the easier challenges to surmount; however, it would still not approach the diversity and range of the entire human intelligence spectrum.

Section 11: The Turing Test

The Turing test is the key hurdle the emergent AI needs to clear in order to demonstrate its indistinguishability from the human intelligence.

In the standard interpretation, the test manifests as a three-party anonymous game between an AI, a human test subject, and a human adjudicator wherein the human adjudicator asks a set of common questions to the communication-isolated AI and the human test subject, respectively, and records their individual responses to make the aforementioned determination.

Therein, the key element to the aforementioned game is the set of common questions asked to, both, the human test subject and AI.

For, if the individual responses to the aforementioned set of questions by the human test subject and AI, respectively, are such that the adjudicator’s probability of correctly attributing each response is 0.5, then the adjudicator would do no better than tossing a fair coin.

Then, the AI would have achieved indistinguishability vis-à-vis the human intelligence, on the requisite set of questions.

Separately, by the computability-learnability equivalence principle, it is evident that the state-of-the-art AI has the same learnability as the Turing machine’s computability.

In particular, the AI can only learn those computable functions that the Turing machines can compute. Then, the AI is an ensemble of learnable computable functions, which can answer the questions, posed during a Turing test, that are computable in their nature.

For example, if the question posed by the human adjudicator is: What’s 2 + 3?

Then, both the AI and the human test subject would respond with the answer: 5

Such a response to the aforementioned question would provide an illusory view based on the aforementioned probabilistic argument that the AI is indistinguishable from the human intelligence — on that question.

However, if the human adjudicator posed a question to the human test subject and AI, respectively, to discern the semantics of a subject at hand, then it is likely that the individual responses would be divergent.

For example, if the human adjudicator presents a piece of art to the human test subject and AI, respectively, and asks to comment about its meaning, then, it is likely that the responses would be different.

Then, the probability of which response came from whom would move closer to 0 or 1, depending on how the attribution is stipulated, and the adjudicator could reliably tell them apart.
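
A toy model of this argument (illustrative, not the formal imitation game): when the responses coincide, the adjudicator is reduced to a fair coin; any systematic divergence makes attribution trivial.

```python
import random

# Toy adjudicator: identical responses carry no identifying signal, so the
# best strategy is a fair-coin guess; divergent responses are a giveaway.
def adjudicator_correct(human_answer, ai_answer):
    if human_answer == ai_answer:
        return random.random() < 0.5  # coin flip
    return True  # divergence: attribution becomes trivial

trials = 10_000
# "What's 2 + 3?": both answer 5.
p_arith = sum(adjudicator_correct("5", "5") for _ in range(trials)) / trials
# "What does this painting mean?": responses diverge.
p_art = sum(adjudicator_correct("melancholy", "a fruit bowl") for _ in range(trials)) / trials
print(f"computable question: attribution accuracy ~ {p_arith:.2f}")  # ~0.50
print(f"semantic question:   attribution accuracy = {p_art:.2f}")    # 1.00
```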

Thus, it is valuable to note the importance of the set of questions posed to the human test subject and AI. For, on one side of the response spectrum, it would appear that the human intelligence and AI are indistinguishable while on the other side, it would not be the case.

Then, the avenue of growth for the state-of-the-art AI lies in achieving indistinguishability over the entire response spectrum; in particular, towards maintaining indistinguishability in semantic fidelity vis-à-vis the human intelligence.

Section 12: It’s An Apple, Isn’t It?

To understand the interpretative semantic difference of the observable reality between the human intelligence and AI, consider the following allegory.

There was a struggling artist who liked to paint. Her art was sublime; however, due to lack of fame, she couldn’t get the opportunity to display her work at the local art gallery. Then, one day, when the art director was feeling particularly generous, he decided to hold an exhibition of unknown artists, in the hope of discovering the next great one.

The next evening, the struggling artist got an invite from the gallery, for the upcoming exhibition. Feeling that her moment had finally arrived, she decided to paint her best art.

The next weekend, with great pomp, the exhibition began; the rich and the bourgeois alike took part. There, in one corner, the struggling artist was standing with her painting: a solitary apple, painted in oil.

Patrons visiting the art gallery appreciated the numerous complex art pieces and offered their interpretation as to what they thought those art pieces meant. Comparatively, the struggling artist’s piece had little room for ambiguity.

Each patron stopping by the struggling artist’s display was counterintuitively perplexed by the simplicity of it. Then, unable to help themselves, many patrons offered elaborate explanations as to what it might represent. Some even criticized the plebeian nature of it; still others offered tips on using a better canvas or colors.

Towards the end of the art exhibition, the art director had allocated time for each artist to come up on the stage and explain what their art meant.

Then, the struggling artist, upon taking the stage, said: “This evening was refreshing. I learned more about the world today than the world learned about my art. I painted an apple, for its simplicity and unambiguity, but also for its ability to distort the patrons’ perspectives, particularly, amidst the more complex artistic themes.

“Notably, the audience response I received was varied; some even offered me tips on how to paint. However, everyone walked away with a convergent observation: they saw an apple, and although our interpretations of what it potentially represents could diverge in infinite ways, it meant only one thing.”

The audience, having understood that the semantic divergence in their individual observations was the actual art, and not the painting itself, collectively stood up to offer a rousing ovation. Of course, from then on, fame and wealth followed the struggling artist; the art director had discovered the next great one.

It is evident in the aforementioned allegory that the subject under consideration, the painting of an apple, caused the collective patrons’ intelligence to have divergent interpretations. On closer examination, it stands to reason that the ambient environment of the art gallery had a key role to play in enabling such varying interpretations. Thus, in the given context, even though the subject had little room for semantic ambiguity, the collective human intelligence of the participating patrons, in order to provide perceived value, resorted to divergent interpretations.

Comparatively, it is unclear if the state-of-the-art AI would have divergent interpretations of the subject under consideration. Furthermore, it is not evident whether AIs based on different approaches would meaningfully diverge on the aforementioned subject. Then, an area of growth for the AI lies in providing divergent interpretations of a fixed subject under consideration. For, such a capability would help bridge the intelligence indistinguishability gap between the human intelligence and AI.

Section 13: The Cat Is Alive And Dead

In classical computing, the unrestricted grammars are deterministic generators that concisely represent the set of strings produced under their set of production rules.

Notably, such strings could be juxtapositions of characters, musical notes, pixels, among other encodings. Then, by computability theory, the languages of such grammars, also known as the recursively enumerable languages, subsume the other Chomsky hierarchy languages and are exactly the languages recognized by the Turing machines operating under the classical constraints.

In comparison, in the quantum computing realm, the grammars and languages, recognized and decided, respectively, by the quantum Turing machines (or the equivalent quantum circuits), must incorporate the notion of superposition states, introduced from the field of quantum mechanics.

Then, the quantum unrestricted grammar must have the production rules that probabilistically assign different outcomes subsuming the number of requisite superposition states.

Consequently, the quantum languages of such grammars are the set of probabilistic strings. The quantum Turing machines (or the equivalent quantum circuits) recognizing and deciding the aforementioned grammars and languages, respectively, must have the probabilistic transition functions.

Subsequently, by the computability-learnability equivalence principle, the neural networks equivalent to such quantum Turing machines (or the equivalent quantum circuits), also known as the quantum neural networks, must accept only the aforementioned quantum languages, in a probabilistic manner.

Then, it is possible that the aforementioned quantum neural networks simultaneously classify a string of pixels generated by the aforementioned quantum grammar as both an alive and a dead cat, as the probabilistic production rules could encode both images as one superpositioned image (e.g. a hologram). Only upon observing the image would its state be known.

This situation may lead to the quantum classification (n-ary)-lemma, for n > 1. For example, at n = 2 it would lead to the quantum classification dilemma.
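
A toy numerical sketch of such a superposed classification, with plain NumPy standing in for a quantum circuit and the amplitudes being illustrative:

```python
import numpy as np

# Toy sketch: a two-label "alive/dead" classification held in superposition.
# Measurement collapses it to one label with Born-rule probability |a|^2.
amplitudes = np.array([1.0, 1.0]) / np.sqrt(2)  # equal superposition
probs = np.abs(amplitudes) ** 2                 # [0.5, 0.5]

rng = np.random.default_rng(7)
labels = ["alive", "dead"]
print("pre-measurement:", dict(zip(labels, probs)))
print("observed:", rng.choice(labels, p=probs))  # known only upon observation
```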

Section 14: The Learnability Fidelity

Henry David Thoreau once pointedly said “It’s not what you look at that matters, it’s what you see.” This advice is pertinent to determining whether all learnabilities are equal.

For, if the truth, in a given setting, is only what the eyes can see, ears can hear, hands can touch, nose can smell, and tongue can taste, then it stands on very fragile ground. Thus, the discernment of the truth, leading to the high-value decisions, is beyond the human senses. It is in the realm of the discerning human intelligence.

More importantly, if learnability varies over the spectrum of data fidelity, then attention and efforts must be directed towards making it as high-fidelity as possible. Thus, it starts with ensuring that the target application has the right amount of high-fidelity data. However, for making decisions on par with the human intelligence, it is becoming increasingly apparent that having the high-fidelity data alone may not be sufficient to build the high-fidelity AI.

If the state-of-the-art AI is compared with the human intelligence, then it naturally manifests that the human intelligence uses a reasonable amount of modest-fidelity, human-senses-based data to make high-value decisions. The industrial-scale sensor-modality advancements have exceeded the human senses in data-acquisition fidelity. However, such advancements have not brought the decision performance of the state-of-the-art AI to the same level as that of the human intelligence.

Therefore, it is unclear if making perpetual advances only to the high-fidelity sensor-modalities and data would be sufficient to build the high-fidelity AI.

In particular, given the decision performance of the state-of-the-art AI, by virtue of assigning prediction probabilities to each decision, it can be inferred that while the AI exceedingly excels at discovering the discernible function embedded in the provided high-fidelity data, it is not on par with the human intelligence in terms of understanding the contextual semantics of the data, in an anticipatory, automatic, and introspective manner.

To be precise, contextual semantics refers to the underlying meaning of the data beyond its functionality and structure to include its existential and temporal nature in the specific context. For example, by looking at a picture, due to the geometry of the pixel data, the state-of-the-art AI may superficially conclude that it is a picture of a spoon. At present, only the human intelligence would suggest that it is not a spoon, by observing from the context of the picture that it was taken at a museum, it is the projected reflection of the art on the wall, and that by turning off the lights the spoon would cease to exist after the museum is closed for the day.

Then, such a decision made by the human intelligence includes not only the data of the subject under consideration but also the perfunctory contextual data in which it is set. In comparison, the state-of-the-art AI does not render decisions in the aforementioned manner. Then, the growth for the AI lies in its ability to discern the contextual semantics in the underlying data in order to make high-value decisions.

Section 15: The Learnability Mode

An important learning based feature of the human intelligence, along with the senses, is attentionability — the ability of the human intelligence to constrain its learnability by controlling the senses through a variety of objectives.

In particular, with the help of well-directed senses, the human intelligence utilizes the attentionability in the spatio-temporal manner as follows.

In the spatial-attentionability mode, the human intelligence learns about the ambient environment by spreading out the attention over a wide space in a short time. Learnable artifacts gained through this approach are breadth-based in nature. That is, the human intelligence learns a little bit about a number of subjects.

In the temporal-attentionability mode, the human intelligence learns about the ambient environment by focusing the attention over a narrow space for a long time. Learnable artifacts gained using this approach are depth-based in nature. That is, the human intelligence learns a lot more about one or a few subjects.

Then, by carefully interleaving the spatial and temporal attention based learning mechanisms, the human intelligence learns the ambient environment, subject to the limitations of senses, the spatio-temporal frame of reference, and the expendable energy.

Section 16: There Is More To The Learnability Mode

As noted above, the human intelligence learns using the two modes of attentionability. However, the degree of learnability varies across them.

The human intelligence operates in the spatial-attentionability mode when the decisions are to be made over a short time. For example, surveying a room to locate a suitable place to sit. Notably, after one pass, there is plenty of data to gain from the ambient environment.

Alternatively, the human intelligence operates in the temporal-attentionability mode when the decisions are to be made over a long time. For example, learning to play a musical instrument.

Consequently, in this mode, as time progresses and the limited ambient data is exhausted, the useful data gained dramatically diminishes, even though the amount of data gleaned over a few subjects per unit time (i.e. the data glean rate) is higher than that in the spatial-attentionability mode.
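
A toy comparison of the two modes under a fixed attention budget, assuming, purely for illustration, logarithmic diminishing returns per subject:

```python
import math

# Toy model (an assumption, not from cognitive science): the useful data
# gleaned from a subject grows like log(1 + time spent), i.e. it diminishes.
BUDGET = 100.0  # abstract attention-time units

def spatial_mode(subjects):
    """Spread the budget thinly over many subjects: breadth."""
    t = BUDGET / subjects
    return [math.log1p(t)] * subjects

def temporal_mode():
    """Sink the whole budget into one subject: depth."""
    return [math.log1p(BUDGET)]

breadth, depth = spatial_mode(20), temporal_mode()
print(f"spatial:  {len(breadth)} subjects, {breadth[0]:.2f} each, {sum(breadth):.2f} total")
print(f"temporal: {len(depth)} subject,  {depth[0]:.2f} each, {sum(depth):.2f} total")
```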

Then, the human intelligence, with its aforementioned spatio-temporal mode of operability, resembles the Etch-A-Sketch toy, whose stylus is precisely controlled by two knobs that move it horizontally and vertically to draw complex, ephemeral lineographic images on its canvas.

Thus, it is not evident that the state-of-the-art AI functions in the aforementioned manner, particularly in being introspective about when to use which mode, based on a variety of higher-order, potentially abstract, cognitive-construct-based objectives.

Section 17: The Semantics-Generating Bio-Mechanical Machines

It is unclear whether the state-of-the-art AI has surpassed the human intelligence in automatically, energy-efficiently, and introspectively discovering and assigning purposeful semantics to the available, human-centric or not, activities and objects in the observable universe.

Then, the humans are incredible semantics-generating bio-mechanical machines. But it also means that their semantics would not scale at the universe level, with several sufficiently advanced alien civilizations participating.

For example, an alien species with bio-levitation capabilities would neither need a sitting instrument such as a chair nor have a word for it in its vocabulary.

This indicates that the human semantics-generation is very human-centric, and as a result not scalable. This limitation particularly applies to the cognitive functions allocated towards developing the human spoken and written languages.

Section 18: The Present And Emergent Semantics Void

In comparison to humans, the state-of-the-art AI does not understand the semantics of the data it is learning on; instead, it focuses only on the structure of the data.

Notably, in the general sense, the semantics of the data transcends its structure; they may converge but are not equivalent concepts.

Then, the emerging problems in the AI-applicability, such as the data biases, ethics, and more, need to be attributed to the aforementioned operating characteristics of the AI.

For example, if biased, demographics-based crime data is provided to such an AI, it would generate predictions in line with the structure learned from the input data, without discerning the underlying contributing socio-economic markers of the crimes.

Thus, such predictions would be considered biased. Consequently, such an AI would be considered unethical, although, technically, it isn’t intentionally unethical, as it lacks self-awareness; rather, it is ineffective.
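
A minimal sketch of that failure mode, under synthetic assumptions: two areas with identical true offence rates but unequal observation, and scikit-learn’s LogisticRegression standing in for the structural learner.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic illustration: the true offence rate is identical in areas A and B,
# but area A is observed far more heavily, so the records over-represent it.
rng = np.random.default_rng(1)
n = 20_000
area = rng.integers(0, 2, n)                   # 0 = A, 1 = B
offence = rng.random(n) < 0.10                 # same true rate everywhere
detect_rate = np.where(area == 0, 0.8, 0.2)    # biased observation
recorded = offence & (rng.random(n) < detect_rate)

# The model faithfully learns the *structure* of the biased records ...
model = LogisticRegression().fit(area.reshape(-1, 1), recorded)
# ... and scores area A as roughly 4x "riskier", despite equal true rates.
print(model.predict_proba([[0], [1]])[:, 1])   # ~[0.08, 0.02]
```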

Therefore, in the general sense, such an AI cannot perform well at semantics-discernment tasks such as detecting fake news, among other tasks.

Beyond the aforementioned present-semantics issue in AI, it is unclear if the state-of-the-art AI understands the gestalt — the emergent-semantics of the collection of, human-centric or not, activities and objects. To further elaborate this notion, consider the following thought experiment.

Take two pencil erasers and a rubber-band, and orient them on the table to look like a smiley-face. To a human, the arrangement would be immediately recognizable as a smiley-face, but to an AI it remains two separate pencil erasers and a rubber-band, as it cannot comprehend the emergent-semantics in an assembly-permutation of the aforementioned objects.

Perhaps, in the aforementioned particular instance of the assembly-permutation, through the application of supervised learning, the emergent-semantics could be provided. However, it is unclear if the AI can simulatably-predict the emergent-semantics of the unforeseen assembly-permutations of the previously known or unknown, human-centric or not, activities and objects.

Moreover, such an unforeseen assembly-permutation could be an artifact in the fields of arts or sciences, the emergent-semantics of which would not be comprehended, much less be attributable to either field, by the state-of-the-art AI.

Furthermore, to a human, the aforementioned smiley-face would evoke the feeling of happiness. It is unclear what is evoked within the AI by looking at the aforementioned individual objects.

Section 19: The Intelligence Spectrum

Observably, the human intelligence perceives, both, the abstract (i.e. the uncomputable) and the defined (i.e. the computable).

Then, it is more than an ensemble of computable learning models. For otherwise, it would not be able to appreciate abstract concepts such as altruism, kindness, and love.

The last one of the aforementioned concepts (i.e. Love) is not only an abstract concept but also an existential imperative for the humans, among other imperatives.

Consequently, if the development of the state-of-the-art AI is necessarily premised on growing only the ensembles of computable learning models then such an AI cannot appreciate the abstract concepts.

Section 20: The Reason To Love

Given sufficiently expendable energy, if the aforementioned AI lives on an enduring machine, then one of the key reasons for love to exist would be rendered moot.

That is, from the utilitarian perspective, if the humans, by transfer-modeling the emerging AI on the way their intelligence works, could live forever then there would be no need for love to ensure their species’ survival. Nevertheless, love would endure and flourish for other meaningful reasons.

Similarly, the notion of human gender would be moot as well; something that dramatically affects the current human gender based societal incentives and values.

Then, the human intelligence, whether to ensure the human species’ continued existence, to complete its neurological function, or for other purposes, continues to find meaning in the neuro-physical coupling through love, something the energy-constrained and expedient AI may find to be useless.

Take Home Thought Exercises:

  1. Watch The Music Video: All is full of love (1997), Björk, Link: https://www.youtube.com/watch?v=AjI2J2SQ528
  2. Watch The Movie: Equilibrium (2002), Link: https://www.youtube.com/watch?v=7wf8LEeoVS4
  3. Discover The Concept: The neuroscientific basis for love.

Section 21: The Spacetime Trade

The raison d’être for the human gender based neuro-physical coupling is, socio-eco-bio-neuro-epistemologically speaking, that men trade spacetime (i.e. the resources and the time it takes to acquire them) for spacetime (i.e. the progeny) with women, to ensure their species’ survival.

From the physics-centric perspective, men acquire space using time to give it to women for them to change it to spacetime using their own and men-provided spacetime. Notably, the variables in such a setup may help modulate human population size, among other objectives.

It is unclear why women would make such a trade if the overall survival didn’t subsume, including through progeny, their gender’s survivability as well.

To facilitate such a trade and to sweeten the deal, nature provides a neuro-bio-chem-physically binding overlay called Love, although there are other reasons to love as well.

Then, all human-couple-based proceedings are essentially spacetime trade negotiations, including their discords, which are spacetime trade disputes arising from mismatched expectations about the fairness of the said trade.

In comparison, for an expedient and genderless AI with sufficiently expendable energy, such a trade would be moot.

Section 22: The Impending Great Intelligence Merge

The humans have advanced through a series of revolutions, the last one being the industrial revolution, while the one currently underway is based on the AI.

With each such revolution, the humans have leaped forward, benefiting and suffering tremendously from the numerous applications and issues that have manifested.

As the pace accelerates in the AI revolution towards an eventual convergence known as the Singularity, where the AI merges with the human intelligence to further augment the overall intelligence, a few questions need to be pondered:

  1. Would the merger of the human intelligence with AI give rise to a new species?
  2. What new applications and issues would manifest in the new converged-intelligence world?
  3. Should the humans, instead of the perpetual advancements, consider the status quo, or even a selective regression to a simpler life, where less is more?

Then, at this point in the human evolution, the problem of choice naturally manifests.

Take Home Thought Exercises:

  1. Watch The Movie: The Matrix Trilogy (1999–2003), Link: https://www.youtube.com/watch?v=82cWadZdAuo
  2. Watch The Movie: The Animatrix (2003), Link: https://www.youtube.com/watch?v=QxyKC00BgcE

Section 23: The Delusions Of Grandeur

It is reasonable to note that the planet-scale, human-centric or not, activities represent a quantum stochastic system. Furthermore, it is conceivable that all objects, human-made or not, could be assigned a label.

Separately, it is entirely possible to collect sufficient pairs of data, both in number and variation, of the type (activity [geometry-structure, kinematics], label).

Then, an AI constructed on the premise of learning on the aforementioned data would present an illusory view that it can predict most activities. However, such an AI would suffer from delusions of grandeur, which wouldn’t be its own and wouldn’t have any basis in the observable reality.

For, the collection of data on most, human-centric or not, activities, both in number and variation, does not equate to the true introspection. There would always be an unpredictable pair that isn’t collected.

Moreover, the data on the aforementioned quantum stochastic system is collected assuming its natural largely unperturbed state, where the system isn’t aware that it is being observed. It is unpredictable how the system would react, on being made aware that it is being observed.

Separately, by carefully designing structurally-isomorphic, human-centric or otherwise, activities and objects, which look a certain way but function in an unforeseen manner, a new state of the aforementioned system can be arrived at that the AI would not be able to predict.

Then, to learn the new state of the aforementioned system, the AI would need more energy, space, and time, without any guarantees that the system wouldn’t change its state again.

Currently, the state-of-the-art AI has a sizable carbon footprint. It is unclear where the resources would come from, to perpetually power it in order to learn the new states of the aforementioned system.

In essence, the humans do not fully understand the nature of sentience yet, and therefore cannot build a model of it to enable the AI to become sentient. Then, in the absence of sentience, the progress in AI is subject to the human participation. Therein, it is unclear who would wield such a resource-hungry AI and be responsible for its untoward ramifications.

Furthermore, it is evident from the computability-learnability equivalence principle that if the classical AI were to be quantum enabled, it would still not be able to recognize and solve uncomputable problems such as the Busy Beaver function and the Halting problem. Then, even with the quantum enabled AI, only approximate solutions to such uncomputable problems would be available.

Then, subject to the sufficiently expendable energy, the aforementioned quantum stochastic system of the planet-scale, human-centric or not, activities could enter an uncomputable state that the AI would not be able to learn on, and therefore, would not be able to predict about.

The energy budget for a perpetually-human-intelligence-lagging AI would be astronomical and would dwarf most human needs, potentially leading to its eventual shutdown.

Section 24: Honey, That’s Why Only Nice Things Can’t Be Had!

Instructively, the rise and fall of the human civilization is deeply intertwined with the collective sensitivities of the human intelligence manifesting from their brains, also known as the neural networks.

At the dawn of a new civilization, there is a large amount of tolerance in such neural networks for external perturbations.

As the civilization progresses, it begets, both, the material and spiritual fruits for its populace to enjoy. Therein, if care is not taken and growth is biased towards the material pursuits only, it leads to the imbalanced training of such neural networks.

Then, such a bias causes the aforementioned neural networks to distort the societal input they receive and to actuate in a manner that hastens the fall of the flourishing civilization. Appropriately, this effect is called the first world problems.

History is an excellent teacher, that is, if one pays attention. Then, to have a lasting and flourishing civilization, moderation is the existential imperative.

That is, balancing the material pursuits with the spiritual growth is the only way to longevity for the individual human intelligence and the collective flourishing civilization, in order to meaningfully shape the growth and trajectory of the emergent AI.

Section 25: To Live As A Tamagotchi Physics Simulation

Historically and contemporarily, humans have been distinguished by the way their intelligence works; it then behooves us to discover:

  1. The general transferable computing capabilities of the human intelligence.
  2. The specific variations of the cognitive-traits of the human intelligence.

Then, inferring from the insights drawn from the aforementioned discoveries: to be able to learn artificially is to live artificially as well. Perhaps this is the way to immortality that humanity has always sought, but it begs the following questions:

  1. What is the extent of change in the corporeal form (i.e. the human body) one is willing to consider in order to live forever (e.g. would one like to live forever as a physics simulation on an advanced machine)?
  2. What sort of pursuits would such a life have (i.e. having left the material world and the corporeal form, what would one do when one is artificially alive)?
  3. Why would one consider becoming an advanced form of the Tamagotchi toy?
  4. Assuming such a corporeal-form-to-simulatable-entity transfer is possible, interstellar space travel is feasible, and sufficient expendable energy is available, would the humans finally become an intergalactic species, escaping the local material confines to live among the stars?

Epilogue

This article examines what it means to learn from the perspective of the human intelligence. Therein, for the purpose of helping shape the growth and trajectory of the AI, numerous aspects of the human intelligence are compared with the emerging AI.