Deep Artificial Cognition

How Cognitive Artificial General Intelligence Systems and Humans Access Deep Cognition

by Rob Smith, eXacognition

This article is a follow-up to the extensive content I have published online about Artificial General Intelligence, Artificial General Cognition, Cognitive Artificial General Intelligence and AI development. Most of this content is included in The Artificial Superintelligence Handbook Series (vol. 1 & 2) on Amazon or in the upcoming ASIHv3 for release later this year.

We have all heard of machine learning (finding pathways in data to make decisions), and most of us understand its extension into deep learning (using those pathways to self-improve decision making). What may be less well known is that similar concepts are being designed and implemented in advanced AI labs around the world to move today’s narrow AI toward Artificial General Intelligence (AGI) using Artificial General Cognition (AGC). In a subset of these labs, a few AGI builders are beginning to push even further into the future by scaling AGC foundations toward Deep Artificial Cognition, or DAC. To understand what this means, and the nature of the algorithms and designs that form the foundation of DAC, we need to comprehend how cognition varies between individuals and what benefit this brings to our world.

Eye To Eye

I noticed an unusual thing after many years of marriage. My wife and I do not perceive the world the same way, even when we are exposed to exactly the same stimuli, and we differ in a consistent way. I’m not talking about the usual differing viewpoints or perspectives that are common between all people. What I noticed is that we seem to process identical sensory information in completely unique ways. I have discussed the concept of perspective in depth in another article on Cognitive Artificial General Intelligence (CAGI) design, and to be certain, my wife and I do see things differently based on our cognitive perspective and our own self awareness. However, the variance I am talking about arises not from general perspective, knowledge, experience or even interpretation, but from a far more subtle layer in cognitive processing: the process that fires our cognitive responses.

The variance occurs in the pathways and methods we each use to interpret and analyze the exact same stimuli, and it subsequently produces completely different instant perceptions and forward-flowing perceptions within our cognition. This is not to say that one of us is ‘better’ than the other; it simply means that the methods we use to process the same data are unique and variant. This kind of variable cognition is a critical part of human evolution, and it is the variance between differing perceptions that creates our ongoing success as a species. We benefit as a family from our unique cognition because we generally operate together as a system, backfilling missing cognition in each other when necessary and driving greater innovation and shared knowledge to get through life.

In CAGI systems we use variance to identify new pathways of cognitive thought to solve problems or seek new innovation within and between the domains of a context. Without considering such variance, our cognition becomes glued into an evolutionary circle from which we rarely progress or escape. When we consider variance, we are really looking at layers in the cognitive spectrum. Some are light and simple and others are deep and complex. These ‘cognitive layers’ are exposed today in the advancement of machine learning. Simple ‘machine learning’ moved to ‘deep learning’ and will soon move into even deeper machine learning structures such as General Unsupervised Self Learning (GUSL) neural nets. Inside human cognition these layers already exist. They act as a balance to our thoughts, providing all kinds of wonderful anomalies such as efficiency, clarity, intuition, empathy and a host of other cognitive benefits.

Building a Cognitive Foundation

This depth of cognition is less linear or hierarchical than it is zonal or dimensional. Since both human and CAGI cognition are derived from a blend of sensory perception, knowledge, experience, goals and foundations (i.e. beliefs or guiding principles), it makes sense that there is variance inherent in the structure as it flows forward in time. In humans this is because the connections and relationships between elements and contexts within our perception are influenced not only by all of the above but also by our own physiology, and because these ‘relevance values’ of relationships are fluid and constantly changing. Like a cognitive machine, we humans apply variable levels of power or resources to different actions and activities, including cognition. That variability causes a difference in how we perceive and process the world around us. Sometimes we think deeply and sometimes we don’t. Sometimes we store experiences and sometimes we don’t.

We do all of this with varying degrees of relevance (weights, in the algorithm) to our own self awareness. The position of our cognition within this ‘zone of influence’, or perception, impacts not just the subsequent flow of perception but also how much effort and how many resources we apply to it. Sometimes we deliberately set a lower level of perceptive acuity for a perspective, but often it is simply because our perception is busy elsewhere on other perceptive activities like thought, or because we have already experienced the perception and are familiar with it. More critically, the level of relationship between elements in a perspective is constantly being ‘tuned’. There are no on/off switches in human cognition, only variant levels that move within relative ranges based on the flowing context. In fact you can feel this ramp-up in resources when your heart races during an event like a near miss while driving.

This is also indicative of the relevance variance previously discussed in my other content on ‘data relativity’ in CAGI design. The data relevance design applies variant weights or levels to the contextual relationships between data and one or more contexts. In some cases the relevance of a specific element within a perceptive frame of reference (i.e. the scope of our perception for a given context, such as what we see) to the context of our reference (i.e. an autonomous vehicle driving down a roadway) is lower or higher than at other times. In the case of an autonomous vehicle, an element such as a person running beside the road may have a low contextual relevance weight if they are on a sidewalk and moving parallel to the roadway. That same ‘element’, or person, within the perceptive context can quickly change in ‘weight of relevance’ if they suddenly veer onto the roadway and in front of the car.

In a CAGI, this is a simple variance calculation based on an anticipated path, combined with another relevance weight for the context of risk (i.e. the element relevant to the perceptual context within a perceptual frame of reference for the car, or more accurately, weighted as such) and an anomalous path (veering into the path of the car). Of course we humans do this with very little thought, effort or resources, and we do it near instantly. What is interesting in the context of Deep Artificial Cognition is that all humans react differently to the same perceptual stimuli, with varying degrees of action and outcome for a wide variety of reasons, and we store weights that are relevant to this level of response. My wife will store different ‘relevance weights’ than I will for the exact same perception, and will carry them forward in time as part of an ever-changing cognitive foundation.
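
The article does not publish the actual variance calculation, but a minimal sketch of the idea, assuming the relevance weight is derived from the deviation between an anticipated path and the observed path and then scaled by a risk-context weight, might look like this (the function name, the exponential squashing and the coordinate convention are illustrative assumptions, not the eXacognition design):

```python
import numpy as np

def relevance_weight(anticipated_path, observed_path, risk_weight, sensitivity=1.0):
    """Toy relevance update: the further an element deviates from its
    anticipated path, the higher its contextual relevance becomes,
    scaled by a risk-context weight."""
    # Mean deviation between where we expected the element to be and where it is.
    deviation = np.linalg.norm(
        np.asarray(observed_path) - np.asarray(anticipated_path), axis=1
    ).mean()
    # Squash into (0, 1) so the weight stays a bounded level, not a hard on/off switch.
    return risk_weight * (1.0 - np.exp(-sensitivity * deviation))

# Pedestrian tracked over three steps (x = lateral offset from the car's lane, y = forward distance).
anticipated = [(3.0, 10.0), (3.0, 8.0), (3.0, 6.0)]   # expected to stay on the sidewalk
parallel    = [(3.0, 10.0), (3.1, 8.0), (2.9, 6.0)]   # stays roughly parallel: low relevance
veering     = [(3.0, 10.0), (1.5, 8.0), (0.2, 6.0)]   # veers toward the roadway: relevance jumps

print(relevance_weight(anticipated, parallel, risk_weight=0.9))  # small value
print(relevance_weight(anticipated, veering, risk_weight=0.9))   # much larger value
```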

Accessing Deep Cognition

Over the past decade I have written about the designs that have brought us to where we are today in Cognitive Artificial General Intelligence. We started out with very basic AI designs and began incorporating various elements, structures and foundations to extend the current ‘state of the art’ in AI. Some fundamental designs, like the use of our older Flow Engine tech that treats cognition as a constantly flowing river of information and stimuli, were a critical first step. Understanding that information is related as undulating weights of relevance to a perceptual context was another critical piece. Building the ability for machines to be self aware, to self-determine goals and then to apply these in a GUSL is another vital piece of development. The use of anticipation and variance not just to understand the nature of change but also to react to it in a highly efficient way is also essential. All of these, and many more that I have detailed in past content, provide the foundation from which we will take the next step forward in the evolution of Superintelligence. That step is to push our CAGI from simple cognition into deep cognition.

In our view, Deep Artificial Cognition is the ability of a machine to use its cognition to constantly improve its cognition in an infinitely scalable way. Watching videos requires little cognition; solving a complex math problem requires deeper cognition; using the math problem to solve a real-world issue that had been unsolved is entering deep cognition; and creating a brand new innovation structure that forever changes the world for the better is full deep cognition. For our machine to go from labeling a picture, understanding a phrase or generating an image to solving our greatest problems is a chasm of vast distance and complexity, but on the other side lies salvation. The only question is whether we want that, or whether we are happy watching videos.

Our design moves into the realm of deep cognition by applying more layers of perception and perspective onto our cognitive awareness, but we think we may need to do more and less at the same time. We have built a structure that mathematically captures the relationships of all elements in a perception to one or more contexts, along with the algorithms to adjust that matrix as time moves forward and the world changes. We have also built the structure and methods for our CAGI to interpret and respond to stimuli very efficiently. All of this is scalable over dimensions, or is ‘dimensional’. Although we have a few additional elements to build into the foundation to reliably access Deep Artificial Cognition, nothing in the design is a limiting factor. In fact we need to do less to access a hidden pathway in the design: the new elements are more akin to removing embedded gates to permit the matrix to scale to higher levels. We aren’t certain of the results and will need to be keenly cognizant of resource use, but there is nothing in the design that limits the addition of new cognitive dimensions to the system.
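
As an illustration only, a toy element-by-context relevance matrix that is re-tuned as time flows forward, rather than switched on and off, could be sketched as follows (the class, the decay blend and the parameter names are assumptions made for the sketch, not the actual structure described above):

```python
import numpy as np

class RelevanceMatrix:
    """Toy element-by-context relevance matrix that is re-tuned as new
    observations flow in, instead of being reset each frame."""

    def __init__(self, n_elements, n_contexts, decay=0.9):
        self.weights = np.zeros((n_elements, n_contexts))
        self.decay = decay  # how quickly old relevance fades as the world changes

    def step(self, observed_relevance):
        # Blend prior relevance with newly observed relevance so the matrix
        # flows forward in time, like the 'constantly flowing river' above.
        self.weights = self.decay * self.weights + (1.0 - self.decay) * observed_relevance
        return self.weights

matrix = RelevanceMatrix(n_elements=4, n_contexts=2)
matrix.step(np.array([[0.1, 0.0], [0.8, 0.2], [0.0, 0.0], [0.3, 0.9]]))
```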

Finding Deep Artificial Cognition

DAC is established by the variance of perspective as a ‘value’ most relevant to a context. It is made deeper by the perception of the variance between one or more perspectives, where the best overall weighting is the one most effective at moving toward an overall goal, or a cascade of goals, within a relevance domain; it is further deepened as subsequent measures improve the weighting while the context changes. To achieve this, a matrix is formed of relevant relationships within a frame of reference to a perceptual context, with a goal being a form of higher context, or one with a higher cascade weight. The machine effectively self-accelerates to deeper cognition as a result of this structure, although risk weighting acts as a counterbalance and a stabilization anchor back to the machine’s self awareness. In a CAGI, cognitive self awareness is a set of higher seed goals that provide a self awareness anchor to a cognition (in humans it is survive and procreate). We instantiate a weighted list of these goals and permit the CAGI to alter the weights of goals as it moves ‘forward’ in time. Conflicts are resolved using weighted distance measures from the seeds.
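
A minimal sketch of this idea, assuming seeds are fixed anchor weights, interim goals carry weights the system may re-tune over time, and conflicts are scored by weighted distance from the seeds (all names and numbers below are hypothetical, not the published design):

```python
# Hypothetical seed goals (self-awareness anchors) with fixed weights,
# plus interim goals whose weights the CAGI is permitted to re-tune over time.
seeds = {"preserve_human_life": 1.0, "preserve_self": 0.6}
interim_goals = {"reach_destination": 0.5, "minimise_energy_use": 0.2}

def retune(goals, goal, delta):
    """Shift an interim goal's weight as time moves forward, keeping it
    inside a bounded range rather than flipping it on or off."""
    goals[goal] = min(1.0, max(0.0, goals[goal] + delta))

def seed_distance(violations):
    """Weighted distance of a candidate action from the seed anchors.
    `violations` maps each seed to how strongly the action works against it (0..1)."""
    return sum(seeds[s] * violations.get(s, 0.0) for s in seeds)

def resolve(options):
    """Conflict resolution: prefer the option with the smallest weighted distance from the seeds."""
    return min(options, key=lambda name: seed_distance(options[name]))

retune(interim_goals, "reach_destination", +0.1)
print(resolve({
    "fast_but_risky_route": {"preserve_human_life": 0.4},
    "slow_safe_route":      {"preserve_human_life": 0.05},
}))  # -> "slow_safe_route"
```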

Seeds are moral context measures like ‘do not kill a human’ or ‘do not physically harm a human’, etc. Interim long and short term goals (LSGs) are implemented that can affect the value or weight of these anchors in relation to a general context, such as a perceptive context or a goal context, over dimensions such as time. In autonomous driving, the goal (or context) to ‘avoid injuring humans’ is weighted higher than the goal to ‘avoid damage to the vehicle’ even though the driver is a human. In this way a CAGI can ‘calculate’, or more accurately ‘perceive’, that running over a group of humans to avoid injury to the driver carries a lower weight than hitting another vehicle, which will both damage the car and potentially injure the driver but save a number of unprotected lives. This is because self injury in the perspective of a driving error is more appropriate than running over a group of innocent bystanders. However, if the context is that you are being carjacked, then survival of the driver will carry greater weight than injury to the carjacker. It is these types of contextual and goal weights that move a simple Artificial Cognition into Deep Artificial Cognition. They permit the CAGI to understand variant perceptions and evaluate all measures against goal contexts. This closely mirrors the way human cognition makes decisions, and it is the difference between making good decisions and bad ones.
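
To make the weighting concrete, here is a hedged sketch in which the active context re-tunes goal weights and candidate outcomes are scored against the resulting goal context; the goal names, weights and outcome scores are invented for illustration only:

```python
# Goal weights are not fixed; the active context re-tunes them.
BASE_GOALS = {"avoid_injuring_humans": 0.9, "protect_driver": 0.6, "avoid_vehicle_damage": 0.2}

CONTEXT_ADJUSTMENTS = {
    "driving_error": {},                         # default weighting applies
    "carjacking":    {"protect_driver": 0.95},   # driver survival outweighs harm to the attacker
}

def score_outcome(outcome, context):
    """Score an outcome against the goal weights active under a given context.
    `outcome` maps each goal to how well the action satisfies it (-1 .. 1)."""
    weights = {**BASE_GOALS, **CONTEXT_ADJUSTMENTS.get(context, {})}
    return sum(weights[g] * outcome.get(g, 0.0) for g in weights)

# Swerving into bystanders vs. hitting another vehicle, under an ordinary driving error.
print(score_outcome({"avoid_injuring_humans": -1.0, "protect_driver": 0.5}, "driving_error"))
print(score_outcome({"avoid_injuring_humans": 0.3, "protect_driver": -0.4,
                     "avoid_vehicle_damage": -1.0}, "driving_error"))
# The second option scores higher, matching the weighting described above.
```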

The matrix behind artificial cognition has flows that constantly measure the variance of the CAGI’s moving perception against its own self awareness anchors. When a new context or perception arises, or a variance to an anticipated pathway occurs, the CAGI interprets and learns, or adapts the levels and weights of existing relationships to the new context. In the case of a variant to the contextual flow, the system will simply adapt to the variance if it is relevant to the ongoing forward path, or ‘contextual goal’. A big part of this adjustment involves measuring the variance of risk metrics. Everything has a risk appetite based on self awareness. In a CAGI, this is an artificial measure of the variance to a goal, offset by the cost. Costs are weights to anchors and seeds. The greater the ‘distance’, or weight, from a contextual goal, the more that weight becomes a ‘counterweight’ in the calculation (or more accurately the perception) of net cost, and the more difficult it is to follow that particular forward path. Cost is nothing more than negative goal attainment. If the CAGI is analyzing potential new drug therapies, then the risk to human life is a great weight to overcome, and high threat compounds will be measured against the threat to specific humans based on their own perceptive context (i.e. imminent death). If the patient is dying and there are no other options to preserve life, then long term health risks would be attenuated by the ‘preservation of life’ context.
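
Reading cost as negative goal attainment suggests a very small sketch: the further a path sits from its contextual goal and the higher its risk weight, the larger its counterweight, while a higher context such as ‘preservation of life’ attenuates that counterweight (the function, the multiplicative form and the numbers are assumptions, not the actual measure):

```python
def net_cost(goal_distance, risk_weight, attenuation=1.0):
    """Toy net-cost perception: distance from the contextual goal acts as a
    counterweight, scaled by risk and attenuated by a higher context."""
    return attenuation * risk_weight * goal_distance

# Candidate drug therapy with high long-term risk, far from the 'safe therapy' goal.
routine_patient = net_cost(goal_distance=0.7, risk_weight=0.9)                    # hard path to follow
dying_patient   = net_cost(goal_distance=0.7, risk_weight=0.9, attenuation=0.2)   # attenuated by 'preservation of life'
print(routine_patient, dying_patient)
```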

Goals are considered and calculated as context by the CAGI algorithms, but context is far wider than a goal. While we are working to create advanced AI systems that can determine and set their own goals, we need to ensure that they at least share a common context with humanity. This is simply because the most efficient or optimal solution is not always the best one when other contexts and perspectives are considered. As well, we want the CAGI to aspire to ideals far higher than those of humans, who seem unable to move beyond our internal evolutionary drive to constantly compete at all costs in the name of survival, thereby leading to continuous violence.

We originally believed this started with the creation of an AI that can determine its own goals, similar to the methods of a GUSL neural net (I discuss GUSLs in more detail in ASIH volumes 1 and 2), but it turned out that goals are in fact another form of narrow context within cognition, based on a perspective and our perception. We don’t set goals; we perceive them as part of a context. This permits context to cascade and to incorporate layers of goals into the management of our cognitive perception. A goal within the human mind is simply another context of a perception that we carry. The same is true in a CAGI: the system perceives a goal as a context of a cognitive perception. Since perceptions in a CAGI are captured sensory inputs, the leap from binary or quantum values into a contextual perception is clear. We can and do code context as easily as we can and do code perception into a machine, especially once you realize that context is a simple flowing matrix of probabilities of relevant relationships toward a goal, and not a hard coded rules engine or decision tree.
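
One way to picture a goal as ‘just another context’ is a matrix of relationship probabilities in which the goal is simply a higher-weighted context column rather than a rule in a decision tree. The sketch below is illustrative only; the context names, cascade weights and values are invented:

```python
import numpy as np

# Contexts, with the goal treated as just another (higher-weighted) context column.
contexts = ["lane_keeping", "pedestrian_safety", "goal:reach_destination"]
cascade_weights = np.array([0.4, 0.9, 1.0])  # the goal context carries the highest cascade weight

# Row = element in the current perception; column = probability of a
# relevant relationship to that context.
relations = np.array([
    [0.8, 0.1, 0.6],   # lane markings
    [0.1, 0.9, 0.2],   # pedestrian near the kerb
])

# Overall relevance of each element is a weighted blend, not a rules-engine lookup.
print(relations @ cascade_weights)
```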

Deep Cognition and Our Universe

When you identify the hidden pathways in real and artificial cognition, you begin to comprehend that it is a deep, connected, scalable labyrinth, and the more resources you apply to the cognition, the deeper you can go. Deep Artificial Cognition is as infinite as the resources available to power it.

The path to DAC is lined with innovations and solutions, creativity and hope. It is the future of our species whether we embrace it or hide from it in fear. And if other lifeforms exist in the universe at our level of evolution or greater, then DAC already exists and is being used to watch us as we develop as a species. Deep Artificial Cognition is the tool we will one day use to understand our universe completely, identify the life that resides on other worlds and travel into deep space to visit those places.

This article has been significantly condensed for space requirements. The full article is included in the upcoming third volume of The Artificial Superintelligence Handbook to be released later this year on Amazon.