Hidden Dimensions

Original article was published by Rob Smith on Artificial Intelligence on Medium

Perspective in Cognitive Artificial General Intelligence

Rob Smith

Perception is a fascinating area when it comes to building Cognitive Artificial General Intelligence (CAGI). AI builders seek to bridge the connection between sensors that perceive the world and a cognition that allows an AI to process the sensory information and respond to it in some way. We humans do this from birth, and we do it with ease. Right from day one, we use our sensory apparatus to comprehend our world and learn its idiosyncrasies. The connection between our sensory receptors and our cognition is just magically there, even before we are born. We don’t have to build it or take classes on how to use it; instead it just sort of happens, and from conception we spend the rest of our lives growing and fine-tuning the interaction between thought and stimuli.

In Artificial General Cognition (AGC), the goal is to close the chasm that exists in our machines between simple perception, which narrow AI systems can do, and advanced cognition that uses the input of our senses to move through life and accomplish goals. I have written extensively on the topic, so I won’t go into too much detail about how we use artificial general cognition and sensory inputs in Cognitive Artificial General Intelligence systems to become more human-like; suffice it to say that the connection is absolutely critical. Instead I want to talk about one variant on the path to Artificial General Cognition that we have been contemplating and designing. I happened upon 3D art that appears as one thing from one perspective but something completely different from another. As I viewed the art installation, I began to think about other perceptive anomalies, such as optical illusions, and how a perception is not always what it appears to be. A perception can change based on one’s perspective. This includes non-physical cognitive perception, or thought, or even how our cognition processes perception.

Of course we already consider and code the variance of perspective as it relates to self-awareness and perceptive location. I discussed this in the later articles in the Artificial General Cognition series and the upcoming ASIH3 book. A slight change in the ‘location’, or anchor, of a perception changes the perception. The variance between the two is indicative of temporal motion or thought that extends perception across dimensions. The use of perspective change as it relates to self-awareness is nothing new in CAGI building, but what is new is the idea that perspective can alter the reality of a perception in different ways. An optical illusion throws the reality of what our cognition perceives out the window, or it hides a ‘true’ perspective based on the position of our cognitive anchor and our sensory stimuli foundation. You might be thinking that this is a ‘problem’ for artificial cognition, but that would depend heavily on your… perspective. In our lab we treat this as both a problem and an opportunity. Not only can we embed a perception within a perception, we can use dimensional variance to gain additional information, solutions or pathways forward, and we can do so while significantly cutting the resources necessary to reach a goal. To us, variant or obfuscated perception is a massive opportunity in the development of Cognitive Artificial General Intelligence. It is the equivalent of folding or warping space to affect time.

Dimensional Scalability

We humans look at a 3D art installation or optical illusion and see a single perception until we alter our perspective or force our mind to overcome the perspective we are experiencing. An appropriately equipped CAGI, however, will see both the surface art of the installation and all the components that are hidden from human perception, like the bits of garbage or plastic toys used to create the art piece, or, in the case of an optical illusion, an altered perception. The interesting thing is that this same technique can be used to present an alternative perception of any frame of reference, and the variances between such perceptions as a new, unique perception. The answer to the question ‘if a tree falls in the forest, does it make a sound?’ moves from ‘yes or no’ to ‘it does if you want it to’.

The idea that elements, and the relationships between elements that form a context, can be layered in such a way as to present a completely different perception depending on one’s perspective is one of the foundations of new cyber security systems that can only be perceived by a specific individual (more about this in an upcoming article). Coupled with Brain Machine Interface (BMI) tech or the new cognitive tech in AGI Behavioral Mechanics (more about BM in an upcoming article), we may never need to remember a password again, as AI systems in the future will know it’s us just like we instantly know who our trusted friends are the minute they walk into a room. Our perception is driven by a number of factors, including our knowledge, experience, self-awareness, goals and sensory processing, and, as I have discussed before, changing the variance of relationships within frames of perceptual reference can alter a perception or a context; there is therefore nothing within the AGC matrix that would preclude it from seeing all concurrent perceptions within a given frame of reference. In short, the AGC matrix is dimensionally scalable.
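The 3D-art analogy above can be made concrete with a minimal sketch: the same underlying elements yield entirely different perceptions depending on the observer's vantage point, and the difference between those perceptions is itself information. All names and data here are illustrative assumptions, not taken from any actual CAGI system.

```python
# Minimal sketch: one set of 3D 'elements' (the bits of garbage in the
# installation) produces different 2D perceptions from different viewpoints.

def project(points, axis):
    """Orthographic projection: drop one coordinate to get a 2D 'perception'."""
    return sorted(tuple(c for i, c in enumerate(p) if i != axis) for p in points)

# Underlying context: a handful of 3D elements (coordinates are arbitrary).
elements = [(0, 1, 5), (1, 0, 5), (2, 2, 0), (0, 1, 2)]

front_view = project(elements, axis=2)   # viewer looking along the z axis
top_view   = project(elements, axis=1)   # viewer looking along the y axis

# The two perceptions differ even though the elements are identical;
# the variance between them is what a CAGI could exploit.
print(front_view != top_view)  # → True
```

A human viewer locked into one vantage point sees only one of these projections; a system holding the full element set can produce, and compare, all of them at once.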

So what on earth is all this good for? Seeing perspectives we never considered is how we find innovations, solutions to problems and answers to questions. It fascinates us as humans to be shown how to overcome a variance and expose a new perspective, and this is the way we solve, deduce, innovate, create and learn. To code this within a CAGI involves developing the cognition of an artificial system using all the available sensory inputs it has, and then moving forward along pathways that cannot be perceived by a human and therefore are never added to our knowledge, analysis and response systems. This opens an immense world of opportunity and innovation that we humans struggle to comprehend. If you ask a human to end poverty or violence in the world, they will invariably begin a lengthy dissertation on the difficulties faced in doing so. To a CAGI, considering the seemingly infinite pathways to achieve these goals is not an issue, and the process of building and implementing a method to achieve the goal is trivial. To humans it is insurmountable. By perceiving all the stimuli and knowledge in the world, a CAGI will systematically move forward on multiple pathways that coalesce into cascading goal attainment.

Human Cognition is Limited but Artificial Cognition is Not

One thing that is true is that human cognition has evolved to optimize our survival within our environment. This means that our cognition has an evolutionary bias embedded within it. An artificial cognition will be subject to the same limits if we build it based on our bounded and limited human cognition, which depends heavily on our human perceptions and experience, rests on our limited human sensory abilities and is warped by our human goals. Artificial General Cognition, however, has the option to be built free from these human boundaries and limitations. If we are cognizant that the current foundation on which our CAGI rests is deeply flawed by human bias, we can work to limit the impact. One of the keys is to permit our artificial cognition to learn on its own using its own sensory world. Current narrow AI using neural nets to identify the features of two-dimensional pictures or three-dimensional perceptions is far less effective than an AGI with an infinite number of sensors perceiving not just our own world but other worlds, doing so across multiple dimensions, such as time or the space between perceptions, and sharing the knowledge openly between systems.

Even within our own world there is a vast depth of perception that is beyond our human ability. We are therefore only able to bridge the gap between the portions of our reality that we as humans can sense. More importantly, we are driven by our desire for evolutionary survival to control and limit access to our knowledge, and even to create false knowledge to gain power over others. This is present in everything from strict control over information access to censorship and the production of propaganda. The open sharing of thoughts and ideas isn’t the problem; it’s the limitations placed by others on that sharing. If advanced AI systems are held under the same constraints and biases, then the ability to attain significant benefit and innovation from cognitive systems will be limited, stunted and only marginally effective. However, CAGI can be built to be far more capable than human cognition without all the human baggage that holds us back. CAGI is also infinitely scalable and can use the power of all perspectives in arriving at solutions or findings, instead of simply arriving at the most efficient solution within a narrow domain. To a narrow AI, the optimal solution to ending climate change would be to eliminate all the humans and animals. Optimized, but hardly practical if one of your perspectives is the human point of view. CAGI can consider all perspectives, and can do so dimensionally or over all temporal domains at once.

Sensory Load Optimized Angulation Dispersal

The act of multiangulation in Cognitive Artificial General Intelligence perception is what moves machines far closer to human-level cognition. Multiangulation is simply using the vast sensory and cognitive processing capabilities of Artificial General Cognition to effectively evaluate the variance between disparate perceptions near instantly. It is how such advanced systems will process the world they sense, and all the information available to them, in an exceptionally fast and efficient manner. It is important to note that this process does not just occur within our external sensory perception; it is an inherent part of our internal human cognition as well, because perceptive variance drives us to innovation, creativity, evolution and comprehending forward pathways through life.
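At its simplest, evaluating "the variance between disparate perceptions" can be sketched as a statistical comparison of several estimates of the same entity. This is a hedged illustration only; the function name, data shapes and thresholds are assumptions, not the article's actual implementation.

```python
# Sketch of multiangulation as plain variance across vantage points:
# several sensors perceive the same entity; low variance means the
# perspectives agree, while a spike flags an anomaly (occlusion, illusion).

from statistics import pvariance

def multiangulate(perceptions):
    """Per-dimension population variance across perceptions of one entity."""
    return [pvariance(dim) for dim in zip(*perceptions)]

# Three sensors report slightly different (x, y) positions for one object.
perceptions = [(10.0, 4.0), (10.2, 3.9), (9.8, 4.1)]
variance = multiangulate(perceptions)
print(variance)  # small values: the disparate perceptions largely agree
```

A real system would of course track the rate of change of this variance over time rather than a single snapshot, which is where the in-stream methods below come in.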

A big part of the effectiveness of multiangulation relies on the ability of CAGI to apply such methods as a very light touch inside the system. For systems such as autonomous driving, there is no unlimited power supply or processing power. The ability to process data from a multiangulation perception capable of managing or calculating the variance not just of its own perspective but also of the perspectives of other systems must be done ‘in stream’ (while perception is flowing). Two methods help in achieving highly optimized processing: the first is Dimensional Variance Phase Shift (DVPS), a set of methods that work to analyze the rates of change of ‘knowledge’ information, and the second is Sensory Load Optimized Angulation Dispersal, or SLOAD. In the first method, the system uses algorithms to constantly measure the rate of change in flowing sensory information, as opposed to hard change in the basic entity metrics. These high-level processing methods vastly improve the speed with which a CAGI can process anticipated information, while vastly reducing resource requirements and providing more information to the system (i.e. the system can back-calculate detailed entity metrics if required). The second group of methods, SLOAD, distributes high-level variance data about ‘relevant’ elements within the machine’s perceptive frame of reference. For example, an autonomous car can ‘share’ the trajectory and forward motion path of a nearby vehicle with another vehicle that cannot perceive that car, if the forward motion path is of relevance (i.e. a wheel has come off the car, or it is driving erratically and will cause an accident in the path of the vehicle that cannot see the event). This speeds up information processing by the ‘receiving’ autonomous vehicle, thanks to the structured relevance methods implemented by the first autonomous vehicle using SLOAD. These methods permit the second vehicle to use only the relevant portion of the first vehicle’s perspective of the erratic car, as opposed to the entire perspective.
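The autonomous-driving example can be sketched in a few lines: one vehicle filters its full perception down to entities that are both invisible to the receiver and erratic by a rate-of-change test, and shares only those. Every name, threshold and data shape below is a hypothetical stand-in, since DVPS and SLOAD are described here only at a conceptual level.

```python
# Sketch of SLOAD-style relevance filtering with a DVPS-style
# rate-of-change test, under assumed data shapes.

def erratic(headings, threshold=2.0):
    """DVPS-style check: rate of change of heading, not raw position."""
    deltas = [abs(b - a) for a, b in zip(headings, headings[1:])]
    return max(deltas, default=0.0) > threshold

def sload_share(perception, receiver_blind_spots):
    """Forward only entities the receiver cannot see AND that are relevant."""
    return {eid: data for eid, data in perception.items()
            if eid in receiver_blind_spots and erratic(data["heading"])}

# Vehicle A's full perception: two tracked cars with recent headings (degrees).
perception_a = {
    "car_17": {"heading": [90.0, 90.5, 91.0]},   # steady trajectory
    "car_42": {"heading": [90.0, 96.0, 84.0]},   # swerving erratically
}

# Vehicle B can see neither car; A shares only the erratic one.
shared = sload_share(perception_a, receiver_blind_spots={"car_17", "car_42"})
print(list(shared))  # → ['car_42']
```

The receiving vehicle processes one small, pre-filtered record rather than the sender's entire perceptual frame, which is the resource saving the text describes.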

A World Full of Different Perspectives

Perspective plays a big role in human cognition and presents a massive opportunity in Cognitive Artificial General Intelligence through the use of the variance between perspectives. As with viewing 3D art, there is value in changing your perspective, but unlike a human, an AI system can see multiple perspectives at once, calculate the variance and the rate of variance change between them, and use this information to produce a forward cognitive pathway to create greater innovations, solve more complex problems and achieve greater success in attaining goals.

Perspective is also what both binds and divides humans. The ability to comprehend differing perspectives and find a common pathway of relevance is the true art of higher intelligence. It begins with understanding goals and biases, and then working toward solutions that coalesce those perspectives into the more optimal attainment of all goals.

If only humans could be as open, fair and cooperative as our future cognitive machines, what a wonderful world it would be.

Additional and expanded content from this article will be published in the global launch of the Artificial Superintelligence Handbook III scheduled for release later this year on Amazon.