Original article was published on Deep Learning on Medium
Exploring the Impact of Artificial Intelligence: Prediction vs. Judgment
Recent progress in machine learning has significantly advanced the field of AI. Please describe the current environment and where you see it heading.
In the past decade, artificial intelligence has advanced markedly. With advances in machine learning — particularly ‘deep learning’ and ‘reinforcement learning’ — AI has conquered image recognition, language translation and games such as Go. Of course, this raises the usual questions about the impact of such technologies on human productivity. People want to know: will AI mostly substitute for or complement humans in the workforce?
In a recent paper, my colleagues and I present a simple model to address precisely what new advances in AI have generated in a technological sense, and we apply it to task production. In so doing, we are able to provide some insight on the ‘substitute vs. complement’ question, as well as where the dividing line between human and machine performance for cognitive tasks might lie.
At the core of your work is a belief that recent developments in AI constitute advances in prediction. Please explain.
Prediction occurs when you use information that you have to produce information that you do not have. For instance, using past weather data to predict the weather tomorrow, or using past classification of images with labels to predict the labels that apply to the image you are currently looking at. Importantly, this is all machine learning does. It does not establish causal relationships and it must be used with care in the face of uncertainty and limited data.
In an economic sense, if we were to model the impact of AI, the starting point would be a dramatic fall in the cost of providing quality predictions. As might be expected, having better predictions leads to better and more nuanced decisions. In terms of organizations embracing AI, there has been a lot of activity and discussion — along with a lot of hype. The major tech companies — Apple, Google, Facebook — have been implementing AI in their products for a few years now, and they continue to roll it out. For the rest of us, not much has happened yet — but there is huge simmering potential and opportunity. Over the next decade, I believe we will see a lot of activity, but we are still at the very earliest stages of this.
You have said that AI is really good at some things, and not at all good at others. Please explain.
People needn’t worry: Artificial intelligence is not about replacing human cognition. As indicated, AI really only ‘does’ one aspect of intelligence, and that is prediction. The complexity of AI lies in its algorithmic coding, not so much in its results. Basically, AI provides us with the ability to make use of the torrents of big data that are flowing into today’s organizations by using complex arithmetic to ‘crunch’ the data and make predictions from the patterns that emerge from it. As it advances, we’ll be able to input even more data, and AI’s breadth of understanding and ability to learn from data will increase. But it is important to remember that AI is always restricted by what it knows.
Having said that, AI is often able to nail a prediction problem in ways that humans cannot. For example, it can now quickly identify the content of images — so quickly that it can use your smartphone’s camera to confirm that it is really you turning on the phone before unlocking it; it can take a string of words in French and translate them into English at speeds human translators could never hope to achieve; and it can take long, complex legal documents and identify sensitive information — which might take a paralegal hundreds of hours. All of this is great news for organizations, but it’s also all the news — because as indicated, that is all AI does.
The challenge for leaders is to figure out, ‘What uncertainty can AI take away for us?’ Can it address something that is really important to the decisions you make, or would it only provide something that is ‘nice to know’ but not essential? For example, a fortune teller does you no good by telling you what will happen next week if there is nothing you can do about it.
Despite all of these advances, you believe humans still have some very important advantages over machines. Please explain.
Humans possess three types of ‘data’ that machines never will. First, we have our five senses, which are very powerful. In many ways, human eyes, ears and noses still surpass machine capabilities. Second, humans are the ultimate arbiters of our own preferences. Consumer data is extremely valuable because it gives prediction machines data about those preferences. Third, privacy concerns restrict the data available to machines. For as long as enough people keep their financial situations, health status and thoughts to themselves, the prediction machines will have insufficient data to predict many types of behaviour. As such, our understanding of other humans will always demand judgment skills that machines cannot learn.
In a recent paper you looked at precisely which types of human labour will be substitutes versus complements to emerging technologies. Please summarize your key findings.
For one thing, we believe that humans still have a considerable edge over machines at dealing with ambiguity. AI is good at making predictions in cases where there are ‘known unknowns’ — things we admit we don’t know — but it is no good at all where there are ‘unknown unknowns’ (unforeseeable conditions) — and it can be sent down the wrong track entirely if there are ‘unknown knowns’ involved (things that are known but whose significance is not properly appreciated).
Also, while AI will continue to grow in scope, in the coming years it is unlikely to be able to make value judgments or predict anything with data that is not clearly and logically linked to the core data set (the ‘known knowns’). Here’s an example from daily life: London taxi drivers have to pass a rigorous test on the best routes around the city before getting their licence. Not surprisingly, they have been significantly impacted by the arrival of Uber drivers who rely on AI-driven GPS mapping. However, if you get into a London cab and say, ‘Take me to that hotel near Madame Tussauds, where Justin Timberlake stayed last week’, the Uber driver’s GPS won’t be able to help you — but the cabbie just might be able to. As leaders scan the horizon for threats and opportunities, it is very important to have a solid appreciation for what AI can and cannot do.
Talk a bit about how reliant prediction machines are on good data.
The current generation of AI technology is called ‘machine learning’ for a reason: These machines learn from data, and more and better data leads to better predictions. But data can be costly to acquire, and thus, investing in it involves a trade-off between the benefit of more data and the cost of acquiring it.
To make the right data-investment decisions, leaders must consider the three ways in which prediction machines use data: training data is used to generate an algorithm in the first place; input data is fed to the algorithm and used to produce a prediction; and feedback data is used to improve the algorithm’s performance over time, as it ‘learns’. How many different types of data does your company need? How frequently do you need to collect it? These are just some of the questions every leader should be asking. It is critical to balance the cost of data acquisition with the benefit of enhanced prediction accuracy.
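The three data roles described above can be illustrated with a deliberately trivial ‘predictor’, a running average standing in for a real algorithm; the class and the numbers are hypothetical, for illustration only:

```python
# Hypothetical sketch of the three roles data plays for a prediction
# machine: training, input and feedback. A running average stands in
# for a real learning algorithm.

class Predictor:
    def __init__(self, training_data):
        # Training data generates the algorithm in the first place.
        self.estimate = sum(training_data) / len(training_data)
        self.n = len(training_data)

    def predict(self, input_datum):
        # Input data is fed to the algorithm to produce a prediction.
        # (This toy model ignores the input and returns its estimate.)
        return self.estimate

    def feedback(self, observed):
        # Feedback data improves the algorithm's performance over time.
        self.n += 1
        self.estimate += (observed - self.estimate) / self.n

model = Predictor(training_data=[10, 12, 14])   # learn an initial estimate
print(model.predict(input_datum=None))          # 12.0
model.feedback(observed=20)                     # the model 'learns'
print(model.predict(input_datum=None))          # 14.0
```

Each feedback observation nudges the estimate, which is why collecting feedback data continuously, not just at training time, improves prediction quality.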
Tell us a bit more about the role of human judgment with respect to AI.
As indicated, prediction machines cannot provide judgment; only humans can do that, because only we can express the relative rewards from taking different actions. Many decisions today are complex and rely on inputs that are not easily codified, and judgment is one of them. Whereas prediction involves ‘information regarding the expected state of the world that can be easily described’, judgment relies on factors that are indescribable and more qualitative in nature — like emotions and experience. Figuring out the relative payoffs for different actions in different situations takes time, effort and experimentation, none of which can be codified.
Objectives in today’s world are rarely one-dimensional. Humans have their own inner knowledge of why they are doing something and why they give different weights to various elements of it; all of that is subjective. As AI takes over prediction, we believe humans will do less of the combined prediction-judgment routine of decision-making and focus more on the judgment role alone. As indicated, AI works best when the objective is obvious. When the objective is complex and difficult to describe, there is no substitute for human judgment.
You and your colleagues also looked at prediction’s effect on decision-making. Please describe it.
We assumed that two actions can be taken by a decision-maker in any situation: a safe action and a risky action. The safe action will generate an expected (and predictable) payoff, while the risky action’s payoff depends on the state of the world. If the state of the world is good, the payoff will be X; if it is bad, the payoff will be Y. Which action should be taken depends on the decision-maker’s prediction of how likely the good rather than the bad state of the world is to occur. As prediction becomes better and better, decision makers will be more likely to choose riskier actions.
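The safe-versus-risky model above can be sketched in a few lines; the payoff numbers are illustrative assumptions, not figures from the paper:

```python
# A minimal sketch of the safe-vs-risky decision model described above.
# The payoff values are illustrative assumptions.

def choose_action(p_good, safe_payoff, risky_good, risky_bad):
    """Pick the action with the higher expected payoff.

    p_good: predicted probability that the state of the world is good.
    """
    expected_risky = p_good * risky_good + (1 - p_good) * risky_bad
    return "risky" if expected_risky > safe_payoff else "safe"

# With a vague prediction (p = 0.5) the safe action wins; a confident
# prediction (p = 0.9) tips the choice toward the risky action.
print(choose_action(0.5, safe_payoff=10, risky_good=30, risky_bad=-20))  # safe
print(choose_action(0.9, safe_payoff=10, risky_good=30, risky_bad=-20))  # risky
```

Note how better prediction changes the decision without changing the payoffs themselves: as p_good becomes more accurate and more confident, the risky action is chosen more often.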
So, we will all be making more decisions — and riskier decisions — over time?
Yes, because as decisions become more complex and we get more help with the parts of them that involve prediction, the things we make judgments about can increase in complexity. As a result, the average person is going to be making different types of decisions in different contexts, and making them more often. No matter what type of work we do, we only have so much time to make decisions, and from that perspective, it can work well to have a machine make more decisions to free us up for other things. In general, we will see a greater variety of decisions and actions being taken.
You have studied the workplace ramifications of all this. Tell us how it will affect, say, an HR manager.
If you think about it, making good predictions is the core of a good HR manager’s job. These managers must predict whether a candidate’s CV makes them worth interviewing and whether, based on the interview, the candidate is appropriate for the job, amongst many other things. While a job that involves hiring people seems as though it demands human intuition, objective statistics have actually proven to be more effective. In a study across 15 low-skilled service firms, my Rotman colleague Mitch Hoffman, along with Lisa Kahn and Danielle Li, found that firms using an objective and verifiable test alongside classic interviews saw a 15 per cent jump in the tenure of hired candidates, relative to those using interviews alone.
As indicated, good predictions feed off of good data, and in the realm of HR, much of the required data is available. Based on it, increasingly complex algorithms will be generated to help HR departments with their predictions — which could reduce bias and errors and save lots of time in evaluating people. AI will almost certainly impact HR jobs, along with many others. But there is good news, too: As jobs transform to accommodate new technology, the real human element behind them will be exposed. It may well be, for instance, that a human face will still be required to deliver hiring or firing news — even if that news is machine-generated.
What does it mean when a company like Google or Microsoft says it is ‘AI first’?
Through my economist’s lens, any statement of ‘we will put our attention into X’ involves a trade-off: Something will always have to be given up in exchange. Adopting an AI-first strategy is a commitment to prioritize prediction quality and to support the machine learning process — even at the cost of short-term factors such as consumer satisfaction and operational performance. That’s because gathering data might mean deploying AIs whose prediction quality is not yet at optimal levels. The central strategic dilemma for all companies is whether to prioritize that learning or shield customers from the performance sacrifices it entails.
Consider a new AI version of an existing product. To develop the product, you need users, and the first users will likely have a poor experience, because the AI needs to learn. A company with a solid customer base could have some of those customers use the AI version of the product and produce training data; however, if those customers are happy with the existing product, they may not be willing to tolerate a switch to a temporarily inferior AI product.
This is the classic ‘innovator’s dilemma’ that Harvard Professor Clayton Christensen wrote about, whereby established firms do not want to disrupt their existing customer relationships, even if doing so would be better for them in the long run. AI requires a lot of learning, and a start-up may be more willing to invest in that learning than its more established rivals.
Due to all sorts of biases, human judgment is deeply flawed. Will AI lead to better decisions, overall?
As humans, our prediction accuracy is quite low, for all sorts of reasons related to an endless list of unconscious biases. Maybe AI will make our decisions better — but remember, that means someone has to define and teach the AI what ‘better’ means. Since we’re so bad at working that out, it’s going to be interesting; but I’m definitely optimistic. With better predictions come more opportunities to consider the rewards of various actions — in other words, more opportunities for judgment. Better, faster and cheaper prediction will give the average human more important decisions to make.
An AI Dictionary for Leaders
Autonomy
Put simply, autonomy means that an AI construct doesn’t need help from people. Driverless cars illustrate the term in varying degrees. Level-four autonomy describes a vehicle that can operate without a human driver, but only under certain conditions; level-five autonomy means the vehicle can operate on its own anywhere, under any conditions a human driver could handle. Anything beyond that would be called ‘sentient’, and despite the leaps that have been made in the field of AI, the singularity (an event representing an AI that becomes self-aware) is purely theoretical at this point.
Algorithm
The most important part of AI is the algorithm. An algorithm is a set of mathematical formulas and/or programming steps that tells an otherwise non-intelligent computer how to solve a problem. In other words, algorithms are the rules that teach computers how to figure things out on their own.
Machine Learning
Machine learning is the process by which an AI uses algorithms to perform artificial intelligence functions: the result of applying those rules to data to produce outcomes.
Black Box Learning
When the rules are applied, an AI does a lot of complex math. Often, this math can’t be fully followed by humans, yet the system outputs useful information. When this happens, it’s called ‘black box learning’. We don’t necessarily care how the computer arrived at its decisions, because we know what rules it used to get there.
Neural Network
When we want an AI to get better at something, we create a neural network, a system loosely modelled on the human nervous system and brain. It uses stages of learning to give AI the ability to solve complex problems by breaking them down into levels of data. The first level of the network may only worry about a few pixels in an image file and check for similarities in other files; once that initial stage is done, the neural network passes its findings on to the next level, which tries to understand a few more pixels, and perhaps some metadata. This process continues at every level of the network.
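The level-by-level processing described above can be sketched as follows; the weights, inputs and the sigmoid squashing function are illustrative assumptions, not a real trained network:

```python
# A toy illustration of layered processing in a neural network.
# The weights and biases below are made up for demonstration.
import math

def layer(inputs, weights, biases):
    # Each unit computes a weighted sum of the inputs, squashes it with
    # a sigmoid, and passes the result on to the next level.
    return [
        1 / (1 + math.exp(-(sum(w * x for w, x in zip(ws, inputs)) + b)))
        for ws, b in zip(weights, biases)
    ]

pixels = [0.0, 1.0, 0.5]                       # first level: raw pixel values
hidden = layer(pixels, [[0.2, -0.4, 0.6], [0.9, 0.1, -0.3]], [0.0, 0.1])
output = layer(hidden, [[1.5, -1.1]], [0.2])   # next level combines findings
print(output)   # a single score between 0 and 1
```

A real network would have many more units per level and would learn its weights from training data rather than having them written in by hand.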
Deep Learning
Deep learning is what happens when a neural network gets to work. As the layers process data, the AI gains a basic understanding. You might be teaching your AI to recognize cats, but once it learns what paws are, it can apply that knowledge to a different task. Deep learning means that instead of merely recognizing what something is, the AI begins to learn ‘why’.
Natural Language Processing
It takes an advanced neural network to parse human language. When an AI is trained to interpret human communication, it’s called natural language processing. This is useful for chatbots and translation services, and it is also represented at the cutting edge by AI assistants like Alexa and Siri.
Reinforcement Learning
One way machines learn resembles the way humans do: reinforcement learning. This involves giving the AI a goal rather than a set of correct answers, such as telling it to ‘improve efficiency’ or ‘find solutions’. Instead of producing one specific answer, the AI runs scenarios and reports results, which are then evaluated and judged. The AI takes that feedback and adjusts the next scenario to achieve better results.
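That trial-and-feedback loop can be sketched as follows, assuming a made-up two-action scenario in which action ‘B’ is secretly the better one:

```python
# A reinforcement-learning sketch in the spirit described above: the
# machine tries actions, receives reward feedback, and adjusts what it
# does next. The two-action scenario and rewards are assumptions.

values = {"A": 0.0, "B": 0.0}   # running estimate of each action's payoff
counts = {"A": 0, "B": 0}

def reward(action):
    # Feedback from the environment: action "B" is secretly better,
    # but the AI must discover this through trial and error.
    return 1.0 if action == "B" else 0.2

def learn(action):
    # Adjust the estimate toward the reward actually received.
    r = reward(action)
    counts[action] += 1
    values[action] += (r - values[action]) / counts[action]

for action in values:          # explore: run each scenario once
    learn(action)
for _ in range(10):            # exploit: repeat the best-known action
    learn(max(values, key=values.get))

print(max(values, key=values.get))   # the machine settles on "B"
```

Realistic reinforcement learners add randomness to keep exploring, since an action that looked bad early on may actually be the better one.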
Supervised Learning
This is training with the answers in hand: when you train an AI model using a supervised learning method, you provide the machine with the correct answers ahead of time. Basically, the AI knows both the question and the answer.
This is the most common method of training, because labelled examples make it easiest to define the patterns linking question and answer. If you want to know why or how something happens, an AI can look at the data and determine the connections using the supervised learning method.
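A minimal supervised-learning sketch, assuming toy data and a simple nearest-average rule: every training example arrives with its correct label, and the model learns the pattern linking the two.

```python
# Supervised learning in miniature: each training example comes with
# the correct answer. The data and the nearest-centroid rule below are
# illustrative assumptions.

def train(examples):
    # examples: list of (feature_value, label) pairs with known answers
    groups = {}
    for value, label in examples:
        groups.setdefault(label, []).append(value)
    # The "model" is just the average feature value per label.
    return {label: sum(vs) / len(vs) for label, vs in groups.items()}

def predict(model, value):
    # Assign the label whose learned average is closest to the input.
    return min(model, key=lambda label: abs(model[label] - value))

model = train([(1.0, "cat"), (1.2, "cat"), (4.8, "dog"), (5.1, "dog")])
print(predict(model, 1.1))   # cat
print(predict(model, 5.0))   # dog
```

The key point is that the labels were supplied by a human up front; the machine only generalizes the pattern they define.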
Unsupervised Learning
With unsupervised learning, we don’t give the AI an answer. Rather than finding predefined patterns, like ‘why people choose one brand over another’, we simply feed the machine a mass of data so that it can find whatever patterns it is able to.
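A minimal unsupervised sketch, a tiny one-dimensional k-means with illustrative data and starting points chosen by assumption: no labels are given, yet the machine finds the two groups on its own.

```python
# Unsupervised learning in miniature: a 1-D k-means clustering. No
# answers are supplied; the grouping emerges from the data itself.

def kmeans(data, centres, steps=10):
    for _ in range(steps):
        # Assign each point to its nearest centre...
        clusters = [[] for _ in centres]
        for x in data:
            nearest = min(range(len(centres)), key=lambda i: abs(centres[i] - x))
            clusters[nearest].append(x)
        # ...then move each centre to the mean of its cluster.
        centres = [sum(c) / len(c) if c else centres[i]
                   for i, c in enumerate(clusters)]
    return centres

data = [1.0, 1.1, 0.9, 5.0, 5.2, 4.8]    # two obvious groups, no labels
print(kmeans(data, centres=[0.0, 6.0]))  # centres converge near 1.0 and 5.0
```

Nothing told the machine that two groups exist or what they mean; interpreting the clusters is left to a human.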
Transfer Learning
Once an AI has successfully learned something, like how to determine whether an image is a cat, it can continue to build on that knowledge even if you aren’t asking it to learn anything more about cats. You could take an AI that identifies cat images with 90 per cent accuracy and, after a week of training it on identifying shoes, hypothetically find that it returns to its work on cats with a noticeable improvement in accuracy.