Source: Deep Learning on Medium
At any given moment, an individual brain cell receives chemical signals from hundreds if not thousands of connected neurons in its network. As excitatory and inhibitory neurotransmitters bathe the receiving cell's receptors, they coax the neuron to fire or to remain inactive. Essentially, the receiving neuron reaches a tipping point, the magic combination of inputs that prompts the cell to fire. A pulse of electricity courses along the cell body, which releases neurotransmitters of its own, encouraging nearby neurons to fire or remain at rest.
Neurons don’t fire in isolation. They fire in networks…
The Brain Electric by Malcolm Gay
One summer when I was just a kid I was sent off to sailing camp. After a few days of training we had our first race, two young sailors per boat; there were probably fifteen boats. Earlier that week we had learned all about winds and tack strategy. As the little fleet of sailboats took off, it occurred to me that each tack took time, which slowed a boat's forward progress, so I quickly calculated the most efficient number of tacks for the distance to the other side of the Pamlico Sound. One tack! After walking through the math, showing that the speed we kept by not tacking would more than make up for the extra distance sailed, I convinced my boat mate to take the gamble. All the other boats were tacking back and forth every few minutes as we headed out. Halfway across the Sound, and a good distance from the fleet, we tacked, and as we sped toward the shore marker we were well ahead of the other teams.
Just as there was value in minimizing tacks to reach the goal, the marker, first, there is value for artificial general intelligence, or AGI, in keeping the focus on the underlying mechanics of human thought: neuroscience. The study of how biological neurons, and sets of neurons, work together to support thought is the most direct line toward the AGI goal.
Thought, with awareness, is different from an approximation model. Just as Sophia, the first robotic citizen, is only a show, a model limited by the constraints of her construct, so is deep learning in artificial intelligence (AI DL). That is to say, the underlying neuron model of deep learning differs significantly from the neuron of the brain, and hence any complex deep learning model attempting to model thought and awareness will only ever be a model limited by its underlying architecture, the neuron model. I write this with the most sincere respect for the deep learning domain. The difference I am calling out here is the neuroscience observation of the subtleties and complexities inherent in biological neurological systems that support general human thought and the construct of intelligence.
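To see how spare the deep-learning neuron really is, here is the standard model in a few lines: a weighted sum of inputs passed through a nonlinearity. The weights, bias, and input values below are illustrative numbers of my own, not anything from a particular network.

```python
import math

def artificial_neuron(inputs, weights, bias):
    """The deep-learning 'neuron': a weighted sum of inputs plus a bias,
    passed through a sigmoid nonlinearity. No membrane dynamics, no
    neurotransmitters, no time: just arithmetic."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid squashes output into (0, 1)

# Illustrative values only
out = artificial_neuron([0.5, -1.0, 0.25], [0.4, 0.3, -0.9], 0.1)
```

Everything a biological neuron does with ion channels, dendritic integration, and timing is collapsed here into one dot product and one squashing function, which is exactly the abstraction gap the paragraph above describes.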
And this very basis is where investigations into artificial general intelligence, AGI, need to start. That is, one approach among many should begin with a model of the human neuron, complete with its DNA and supporting cellular machinery. The incredible subtlety of how a human neuron exists within sets of neurons, and how thought is derived out of roughly 86 billion neurons and somewhere in the range of 100 trillion synaptic connections, is how we get from models to a true AGI.
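Even the simplest biologically grounded neuron model looks nothing like the weighted sum above. A leaky integrate-and-fire neuron, sketched below with textbook-style parameter values I have chosen for illustration, at least has a membrane voltage that evolves in time, a firing threshold, and a reset, the tipping-point behavior described in the epigraph:

```python
def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-65.0, r_m=10.0):
    """Leaky integrate-and-fire neuron (illustrative parameters).
    input_current: external current at each time step (arbitrary units).
    Integrates the membrane voltage with Euler steps and returns the
    times (in ms) at which the neuron crossed threshold and fired."""
    v = v_rest
    spike_times = []
    for t, i_ext in enumerate(input_current):
        # Voltage leaks back toward rest, driven up by input current
        v += (-(v - v_rest) + r_m * i_ext) * dt / tau
        if v >= v_thresh:          # tipping point reached:
            spike_times.append(t * dt)
            v = v_reset            # fire, then reset
    return spike_times

# Constant drive for 200 ms produces a regular spike train
spike_times = simulate_lif([2.0] * 200)
```

And even this is still a cartoon: it has no dendrites, no synapse-level chemistry, none of the DNA-level machinery the paragraph above points to. The distance between this sketch and a real neuron is the distance the article argues AGI research must close.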
Granted, humanity does not have the computing power or architecture to accomplish this objective today, but technology trends suggest we are on a trajectory that will enable it at a more or less calculable point in the near future. That is, given the historical trend of computing advancement, it is possible to predict, within an error range, when specific advancements currently in development will become commercially available (witness qubit-based computers) and so to arrive at a point in the near future where we may well have the requisite computing infrastructure to support a brain model. Yet will we have the appropriate neuron system model when we get there?
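The kind of back-of-the-envelope trend extrapolation this paragraph describes can be made concrete. The synapse count comes from the text above; the per-synapse rates, today's machine capacity, and the doubling period are all rough assumptions of mine, so treat the answer as an illustration of the method, not a forecast:

```python
import math

SYNAPSES = 100e12          # ~100 trillion synaptic connections (from the text)
UPDATES_PER_SEC = 1000     # assumed per-synapse update rate (hypothetical)
OPS_PER_UPDATE = 100       # assumed ops to model one synaptic event (hypothetical)

required_ops = SYNAPSES * UPDATES_PER_SEC * OPS_PER_UPDATE  # ops/s for a brain model
current_ops = 1e18         # order of magnitude of today's largest supercomputers
doubling_years = 2.0       # assumed Moore's-law-style doubling period

# Years until compute capacity reaches the requirement, under steady doubling
years_to_go = max(0.0, doubling_years * math.log2(required_ops / current_ops))
```

Change any assumption by an order of magnitude and the answer shifts by only a few doubling periods, which is why such estimates land "within an error range" rather than at a date.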
Witness Geoffrey Hinton's latest architectural advancement, capsule networks, which come even closer to how the brain works with sets of neurons. Make no mistake, the trend is and has been toward using the brain as the example. But a model is, by definition, only ever some degree of approximation, and I would further argue that the greater the abstraction in the model, the further it stands from the object being modeled. And this is the point: we have arrived at a moment where we are realizing and articulating the shortcomings of deep learning, and where the compute trend supports actual biological neuron modeling rather than abstract general approximations.
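The capsule idea replaces single scalar activations with small groups of neurons whose output is a vector: its direction encodes an entity's properties and its length encodes the probability the entity is present. The "squash" nonlinearity from Hinton and colleagues' capsule work does this scaling, and it is simple enough to sketch directly:

```python
import math

def squash(s):
    """Capsule squashing nonlinearity: shrinks a capsule's output
    vector to a length in [0, 1) while preserving its direction,
    so the length can be read as a probability of presence."""
    norm_sq = sum(x * x for x in s)
    if norm_sq == 0.0:
        return list(s)
    norm = math.sqrt(norm_sq)
    scale = norm_sq / (1.0 + norm_sq)   # long vectors -> length near 1
    return [scale * x / norm for x in s]

v = squash([3.0, 4.0])  # input vector of length 5
```

A whole vector standing in for a group of neurons is a step closer to the brain's neuron sets than a lone scalar, which is exactly the direction of travel the paragraph above describes, though it remains an abstraction all the same.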
We are at the front door of new possibilities!
“A journey of a thousand miles starts beneath one’s feet”