Source: Deep Learning on Medium
Future of AI is Biological
Native/natural intelligence (NI) is organic and biological. Today we think of Artificial Intelligence (AI) as machines, robots, and software code. Will it stay that way, or will it change into biological artifacts?
AI's journey so far, though inspired by the human brain, is diverging from it and increasingly looks unsustainable. Unsustainable energy and data consumption, differences in representation compared to the brain's cortex, and bottlenecks in learning all make the case for stem cell-based neural networks as a promising alternative.
Just hogs too much energy: Machine-based AI can compute faster, but it hogs far too much energy to do so. Researchers at the University of Massachusetts, Amherst recently established benchmarks for AI training (figure below). While these models draw thousands of watts, the brain needs 20 W. Yes, that's it: AI, thousands of watts; NI, 20 W. Add to that the cost of cloud computing, which runs in mega server farms powered by a mix of renewable and non-renewable energy.
Unsustainable for climate change: As the benchmarks show, training a large AI model emits more than 5X the carbon dioxide of an average American car. That's unsustainable.
This is a minimum: Practitioners don't tune one model; they train hundreds or thousands of variants to find the best-performing one. The same researchers analyzed a typical tuning process in R&D and found it takes thousands of runs. So the numbers above are a floor; multiply them by the number of runs.
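The arithmetic behind that multiplication is worth making explicit. Here is a minimal back-of-envelope sketch using the article's rough round numbers (thousands of watts per run, ~20 W for the brain, thousands of tuning runs); these are illustrative placeholders, not exact figures from the study:

```python
# Back-of-envelope comparison of AI training power vs. the brain's ~20 W budget.
# All figures are illustrative round numbers from the article, not study data.

AI_POWER_W = 1000      # "thousands of watts" per training run (assumed lower bound)
BRAIN_POWER_W = 20     # the brain's steady power draw
TUNING_RUNS = 1000     # "thousands of runs" during a typical tuning process

single_run_ratio = AI_POWER_W / BRAIN_POWER_W       # gap for one training run
full_tuning_ratio = single_run_ratio * TUNING_RUNS  # gap across the whole tuning campaign

print(f"One training run draws ~{single_run_ratio:.0f}x the brain's power.")
print(f"A full tuning campaign draws ~{full_tuning_ratio:,.0f}x in aggregate.")
```

Even with these conservative placeholders, a ~50x per-run gap grows by another three orders of magnitude once tuning is counted.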
Not like a brain: Having burnt all this energy, what did we get? A narrow, single-task problem solver. Multiple AI models must be piped serially for complex tasks. This is very different from NI, which can be both serial and massively parallel. Next time you watch a movie, remember that you are hearing, seeing, and imagining at the same time. Below is an image from a brain imaging study that showed about 50 independent processes running in parallel during a visual-motor task.
Structured differently: That's because brain research increasingly points to the brain being structured differently from our deep learning architectures. For one, data in deep learning is represented in dense form, while the brain seems to use sparse representation. What does that mean? For every person you have met, a dedicated set of neurons maintains their face; when you see them again, that set of neurons fires. This is very unlike face recognition systems, which recognize a face via a serial network of features, contours, and so on. There is no one place in the AI network for a given face; all faces are mixed into the same weights. A brain with sparser representation can therefore handle noise better. You can't fool the brain by masking half of a face, but AI can be fooled by perturbing even a few pixels, because a different part of the network gets triggered.
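The sparse-versus-dense contrast can be sketched in a toy numpy example. Everything here is hypothetical and hugely simplified: the "identities" are hand-made 12-unit vectors, not real faces or neurons. But it shows why a code with private, disjoint units tolerates masking, while a shared, overlapping code can be flipped by changing a couple of values:

```python
import numpy as np

# Toy illustration (not a model of the brain or of real face recognition):
# "sparse" codes give each identity its own private set of units, while
# "dense" codes share almost all units between identities.

# Sparse: 3 identities, each owning 4 disjoint units of a 12-unit vector.
sparse = np.zeros((3, 12))
for i in range(3):
    sparse[i, i * 4:(i + 1) * 4] = 1.0

# Dense: 3 identities sharing most of the 12 units; only the last units differ.
dense = np.array([
    [1] * 10 + [1, 0],
    [1] * 10 + [0, 1],
    [1] * 9 + [0, 1, 1],
], dtype=float)

def recognize(codes, probe):
    """Index of the stored pattern with the highest dot-product similarity."""
    return int(np.argmax(codes @ probe))

# Masking half of identity 0's active units: the sparse code still wins,
# because no other identity shares those units.
masked = sparse[0].copy()
masked[0:2] = 0.0
print(recognize(sparse, masked))   # -> 0: still recognized

# Changing just two shared units of the dense probe flips the answer,
# because every identity competes over the same units.
tweaked = dense[0].copy()
tweaked[10], tweaked[11] = 0.0, 1.0
print(recognize(dense, tweaked))   # -> 1: a few "pixels" fooled it
```

The design choice mirrors the article's claim: in the sparse code, noise outside an identity's private units simply cannot reroute recognition, whereas in the dense code every unit is load-bearing for every identity.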
Doesn’t learn like humans: This has huge implications for how learning happens. AI systems today need thousands of cat images to recognize a cat. NI/the brain doesn’t. We build on our existing knowledge and extrapolate, which means significantly less computing and energy.
When during your childhood did your parents teach you how to work with gravity? Never. You were working with it as an infant and knew fairly early in life that things fall when you release them. NI has layers of built-in knowledge, gravity and other physics among them, which we can seamlessly and in parallel connect with other parts of our sparse representation, extending our knowledge to unknown scenarios.
Edge cases are self-defeating: This hits at the core problem of AI today: edge cases. To make a self-driving car robust, it must be trained not to drive off a cliff. NI was, in a sense, born with this: we see a cliff, our gravity intuition triggers, and we know we will fall. AI needs to be trained, but you can’t push a car off a cliff to train it, and you will never have enough data points. AI in such cases learns in simulation. There is never enough data for edge cases; that’s why they are called edge cases. This is a fundamental, self-defeating paradox in AI that the NI structure overcomes.
The neocortex is a complex but replicated structure: The NI structure in the brain is complex and still being studied, but some aspects enjoy general agreement. The neocortex is the part of the brain associated with higher-order functions. We have long believed that different parts of the brain process different senses, but it has increasingly been shown that the underlying structure (columns, connections, etc.) of these different areas is similar. What differentiates them is the input, not the inherent structure. Reroute audio input to the visual areas and they will still learn and work. Even the number of neurons per square millimeter of cortical surface seems similar; the same tissue is repeated. Very different from AI. Can this be built?
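The "same tissue repeated, differentiated only by its input" idea can be sketched as one generic module wired to different senses. The CorticalColumn class and its sizes below are pure assumptions for illustration, not a neuroscience model:

```python
import numpy as np

class CorticalColumn:
    """One generic module with identical structure everywhere.
    (Hypothetical sketch: a random linear map plus a nonlinearity.)"""

    def __init__(self, n_inputs, n_units=32, seed=0):
        rng = np.random.default_rng(seed)
        # Same internal structure regardless of modality; only the
        # input dimensionality (the wiring) differs.
        self.w = rng.normal(scale=0.1, size=(n_units, n_inputs))

    def respond(self, signal):
        return np.tanh(self.w @ signal)

# "Visual" and "auditory" areas: the same class, differentiated by input.
visual_area = CorticalColumn(n_inputs=64)    # e.g. an 8x8 image patch
auditory_area = CorticalColumn(n_inputs=16)  # e.g. 16 frequency bands

image_patch = np.ones(64)
sound_frame = np.ones(16)
print(visual_area.respond(image_patch).shape)    # -> (32,)
print(auditory_area.respond(sound_frame).shape)  # -> (32,)
```

Both areas produce the same kind of 32-unit response; nothing about the module itself is "visual" or "auditory", which is the point the paragraph makes about the cortex.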
Brain organoids look promising: In the lab, using stem cells, this is already being done. See the picture below of mini-brains/brain organoids.
They are grown in a petri dish. The tissue is similar to brain tissue and forms a low-energy neural network. The cells self-organize into neural tissue. Scientists have made strides in creating them repeatably and in stabilizing them to survive longer by providing vasculature, i.e., growth of blood vessels. They have also been grown to show varied functions, such as retinal cells (a primitive eye), leading us toward an era where sensing becomes possible. These organoids produce brain waves just as human brains do. Recently an AI system failed to distinguish between the electrical patterns of 9-month-old brain organoids and the brains of premature newborn babies.
Biological computing is coming: These organoid tissues self-organize and develop over months. Can they be directed to perform specific activities? A parallel stream of research suggests yes. Recently a team from UC Davis & Harvard demonstrated a DNA computer that ran 21 different programs (e.g., copying, sorting, recognizing palindromes and multiples of 3).
A sustainable, low-energy, self-organizing, brain-like learning system is increasingly possible.
Advances in molecular programming/biological computing, stem cells and organoids, gene editing (CRISPR) for targeted development, and the derivation of neural computational models from these structures will take AI biological.