The more I learn about AI the more I value the human brain
One of the unintended consequences of working on artificial intelligence is the realisation of the marvellous complexity and awe-inspiring workings of the human brain. The more I learn about AI, the more fascinated I am by the human brain and what it is capable of.
Artificial intelligence is one of the technologies most likely to revolutionise the world we live in. Not only because it has a wide range of applications, but also because it has the potential to change the role of humans in the world.
The advent of robots, both more and less sophisticated, is already reshaping the workforce of the future, making some jobs unnecessary and creating new ones. Our lives were already disrupted by the first digital revolution, which made us more social, connected and mobile. Artificial intelligence is part of a second digital revolution that will impact our lives even more profoundly. Beyond our jobs, the way we interact with objects will also change as AI powers the Internet of Things (IoT), and the human + machine relationship will evolve as robots perform tasks that we traditionally imagined to be exclusive to humans, e.g. elderly care.
Data has been called the new oil, among other things, and there is little doubt that it will be at the centre of value generation in the coming decades. Artificial intelligence will support that.
While many questions remain unanswered about the safety, ethical use and safeguards required to reap the benefits of AI while protecting society and individuals against its potentially harmful consequences, the truth of the matter is that algorithms are already among us in many areas of our lives.
Netflix, Amazon, Google Search, medical diagnosis, bail decisions in US courts: all are powered by algorithms. These algorithms output decisions or decision-making factors that influence our behaviours (e.g. shopping recommendations) or directly affect our life conditions to a great extent (e.g. parole decisions based on recidivism risk, medical diagnoses).
However, I am not here today to speak about artificial intelligence but about our own.
Artificial vs human intelligence
In our pursuit of building an artificial intelligence, we seem to forget or take for granted how powerful our own brain is.
Yes, algorithms can process large quantities of data much faster than we can, but programming an algorithm to do something as simple as recognising a dog in a picture, which a 2-year-old can do without hesitation, takes an enormous amount of time and effort, as well as a great deal of feedback to train the machine on a large data set.
All the activities that we programme AI to perform, and that it manages to execute at only a basic level, our brains carry out constantly, concurrently and without any dedicated training effort.
As Brian Cantwell Smith says in “The Promise of Artificial Intelligence”, while AI can do many things extremely well, understanding the richness of the universe, which comes from authentically engaging “the world as the world” the way children do, is still far beyond its capabilities. He describes AI systems’ lack of true contextual understanding, or “meta-knowledge”: an awareness of the world and of themselves within it. This is a unique characteristic of human intelligence. A self-driving car, for example, can orient itself on the road, but it is completely ignorant of the concept of transportation itself: why humans do it, the efficiencies (or inefficiencies) it brings to our lives, its implications for our planet and for non-human species, and so on.
It is because of these shortcomings that the capabilities of AI will need to be complemented with human wisdom and creativity for the foreseeable future.
We humans may be subject to many flaws of cognitive perception and bias, but the capacities of our brains are mind-blowing. Maintaining this fascination with the human brain and its capabilities will help us preserve the much-needed human-centric and human-controlled approach to AI.
Most AI governance and ethical frameworks stress the need to ensure that human oversight of algorithms is in place, that ultimate human accountability and liability are maintained, and that the explainability of algorithms allows their decision-making to be traced back to clear human objectives and boundaries.
As I mentioned in my article “Can biased people create unbiased algorithms”, the key to our future is thinking early about what the human-machine relationship should look like.
Understanding the power of our own intelligence, and the areas in which we are superior to any artificial intelligence, is a good first step towards structuring the human-machine relationship in a way that respects human oversight and control as well as a ‘human-in-the-loop’ approach. This is important for achieving fundamentally human values, including trust, fairness, prevention of harm and human autonomy, as defined by the EU’s High-Level Expert Group on Artificial Intelligence.
In our effort to build a parallel intelligence, let’s not forget to continue improving, nurturing and valuing the most amazing intelligence there is: our own.