A couple of months ago, Google revealed a jaw-dropping demo of its latest technology at its annual developer conference, Google I/O. It’s called Google Duplex: an AI assistant that phones real businesses on your behalf, holding a natural-sounding conversation with the person on the other end to book appointments and reservations.
It’s extremely impressive. The individual challenges of parsing human speech over phone calls with varying audio quality, interpreting natural language into machine-understandable text, producing sensible responses that adapt to the flow of the conversation, and synthesizing a believable human voice are all so staggeringly difficult on their own that seeing a program accomplish it all at the pace a real human would is, to put it lightly, astounding. Many news outlets published articles asking questions like “Did Google’s Duplex AI Demo Just Pass the Turing Test?”, and some even answered “Yes, it did.”
They’re wrong, at least for now.
The Turing Test: what it is and what it isn’t
“I propose to consider the question, ‘Can machines think?’” — Alan Turing in “Computing Machinery and Intelligence”
In 1950, Alan Turing introduced the concept of what is now known as the Turing Test in a seminal paper on artificial intelligence, Computing Machinery and Intelligence. Answering the question “Can machines think?” proved to be a messy philosophical debacle, relying too much on one’s definition of “thinking” and other subjective nuances, so instead he devised an experiment: a human judge holds a text-only conversation with both a human and a machine, and if the judge cannot reliably tell which is which, the machine passes.
Turing proposed that, if a computer were able to pass the Turing test reliably, it could be considered an intelligent being. The proposition relies on the notion that holding a sensible conversation, while trivial for the average human being, is actually an incredibly demanding mental task.
Turing’s not alone in this assumption. Descartes argued in his Discourse on Method (1637) that a machine could never modify its responses to whatever was said in its presence, “as even the most stupid men can do.” Daniel C. Dennett, a philosopher of mind and cognitive scientist, proposed the “quick-probe assumption”, the idea that anything that passes the Turing test displays intelligence over an indefinite number of domains. Terry Winograd, a leader in artificial intelligence, illustrates this concept with two sentences that differ by a single word. Here’s the first one:
The committee denied the group a parade permit because they advocated violence
Here’s the second one:
The committee denied the group a parade permit because they feared violence
Now answer this question: in each of these sentences, who is “they”?
Holding a conversation requires a vast amount of background knowledge and abilities. A computer can’t answer a question like that without knowledge of parades, committees, permit laws, and so on. This is why Google Duplex didn’t pass the Turing test. The Turing test, the real Turing test, is far too difficult for any AI today. Duplex managed to emulate a human in two very specific situations: making a hair appointment and booking a reservation at a restaurant (it can also get holiday hours). Ask Duplex about its thoughts on America’s Got Talent and it won’t have a clue what to say.
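To see why this stumps a machine, notice that the two sentences above are lexically identical except for a single verb. A quick sketch in plain Python (no NLP library, just a word-by-word comparison) makes the point: nothing on the surface of the text says who “they” is, so the answer has to come from background knowledge.

```python
# The two Winograd-style sentences from above: identical except for one verb.
s1 = "The committee denied the group a parade permit because they advocated violence"
s2 = "The committee denied the group a parade permit because they feared violence"

# A surface-level comparison finds the single differing word...
diff = [(a, b) for a, b in zip(s1.split(), s2.split()) if a != b]
print(diff)  # [('advocated', 'feared')]

# ...but grammar alone can't resolve "they":
#   "advocated" -> "they" = the group
#   "feared"    -> "they" = the committee
# Getting that right requires knowing that committees fear violence and
# deny permits, while groups that advocate violence get permits denied.
# That's world knowledge, not syntax.
```

Since both sentences have the exact same parse, any system that answers correctly must be drawing on knowledge from outside the sentence itself.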
Duplex actually isn’t the first to simulate conversations in a very narrow field of topics. LUNAR was a program designed to answer questions about moon rocks and provided correct, appropriate responses to about 90% of questions posed by geologists and other experts. The PARRY chatbot managed to pass a modified version of the Turing test about 50% of the time (a result consistent with random guessing on the part of the human judge) by simulating a person with paranoid schizophrenia. While today’s artificial intelligence doesn’t have the abilities “over an indefinite number of domains” required by the true Turing test, it can provide results in very specific areas. And, in a lot of cases, that’s all it needs to do.
Artificial Intelligence, and what it means for us
“Success in creating effective AI could be the biggest event in the history of our civilization. Or the worst. We just don’t know.” — Stephen Hawking
First off, let’s make sure we’re all on the same page. Artificial Intelligence is not science fiction. It’s not Terminator, it’s not RoboCop, it doesn’t even have to be a robot. So what is it then? Well, that’s a bit of a broad question, but for our purposes, we can simplify AI down to three different categories:
- Artificial Narrow Intelligence (ANI): AI that can perform as well as or better than a human in one specific area. Deep Blue, the computer that beat Garry Kasparov at chess, is a famous example of ANI.
- Artificial General Intelligence (AGI): AI that can perform as well as a human across all areas. An AGI would be able to pass the true Turing test, but so far none have been created.
- Artificial Super Intelligence (ASI): AI that can perform better than a human across all areas. ASI is an extremely controversial subject and beyond the scope of this post.
Right now, we live in a world filled with ANI. ANI can name the song you’re playing, translate foreign languages, and tag you in your Facebook photos. It’s limited, but not as limited as you might think. Through machine learning, an ANI can get really good at one specific task, and many ANIs can be combined to handle more complex ones. Creating an AI to “call and schedule an appointment with a hair salon” is difficult, but it becomes easier when you break it up into smaller tasks like “identify words and phrases in audio” and “produce a sensible response”, and then break up those tasks into even smaller tasks, over and over again until you finally end up with something manageable. This is a lot like how most computer programs are written: you write a bunch of smaller functions and then combine them into one larger program. With this mode of thinking, it’s not hard to see how AI could be made to do just about anything, including most of our jobs.
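As a sketch of that decomposition idea (every function name here is a hypothetical placeholder I made up, not Duplex’s actual architecture), the “large” task ends up being nothing more than narrower subtasks composed in order:

```python
# Hypothetical sketch: a hard task ("schedule an appointment by phone")
# decomposed into narrower subtasks, each stubbed out for illustration.

def transcribe(audio: str) -> str:
    """Subtask 1: identify words and phrases in audio (stubbed)."""
    return audio  # a real system would run speech recognition here

def understand(text: str) -> dict:
    """Subtask 2: extract the caller's intent from the text (stubbed)."""
    return {"intent": "book_appointment", "utterance": text}

def respond(state: dict) -> str:
    """Subtask 3: produce a sensible response (stubbed)."""
    return f"Sure, confirming your {state['intent']}."

def schedule_appointment(audio: str) -> str:
    """The 'large' task: just the smaller functions, composed in order."""
    return respond(understand(transcribe(audio)))

print(schedule_appointment("I'd like a haircut at noon on Friday"))
```

Each stub could itself be split further (speech recognition into acoustic modeling plus language modeling, and so on), which is exactly the recursive breakdown described above.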
You’re probably aware of the jobs that are already being automated. Robot arms build cars on the assembly line, trading bots buy and sell more shares on the stock market than humans do, and self-checkout terminals are slowly making cashiers obsolete. And you can probably think of a couple more jobs that could be automated: self-driving cars could become self-driving trucks, and a malicious version of Google Duplex could become the world’s most terrifying telemarketer. But there are probably a lot of jobs that you never knew could be automated, and that’s where the story gets interesting.
Remember Watson? He’s the supercomputer that beat two of the greatest champions in Jeopardy! history at their own game. He’s also quite possibly the greatest doctor in the world. Watson has consumed over 2 million pages’ worth of medical evidence, training cases, and clinical literature, and uses that knowledge to help diagnose patients more effectively. Sure, Watson doesn’t get the answer right 100% of the time, but human doctors make mistakes all the time too. Watson doesn’t need to be perfect; he just has to be better than humans.
Think a degree from a fancy law school will keep you safe? Think again. Trials are just a small fraction of a lawyer’s job; most of it is tedious paperwork. Reviewing hundreds of documents to find evidence for a case, drafting the same wills, trusts, and other contracts over and over again: all of these tasks can be automated, and many of them already are. Lawyers won’t become obsolete any time soon, but there won’t be a need for nearly as many as we have today.
I could go on and on about how many professions could be automated. AI could be used to replace movie stars with CGI recreations, animate cartoons from just a script, and compose original music. It’s already been used to write more than 800 articles for the Washington Post. In fact, I’ve even used it to write this very post.
I’m kidding, but that would’ve been really cool, wouldn’t it?
At this point, it should be clear that for a lot of people, it’s not a matter of if your job will be automated; it’s when. Left unchecked, AI could displace as many as 73 million workers in the U.S. alone by 2030. Yes, 2030. Not 100 years from now, not even 50 years from now. Chances are, the AI revolution is going to happen in your lifetime, which is why you need to be ready for the consequences.
Artificial intelligence is the most important topic of the 21st century, possibly in human history. The status quo is safe for now, but AI will advance exponentially over the next several years and improve faster than we realize. We can’t just stop people from developing AI — it’s gonna happen sooner or later — but we can try to prepare ourselves to live in an automated world. The first step is to know what’s gonna happen and what your options are. Perhaps you think the government should implement a universal basic income, so that the newly unemployed will still be able to earn a living. Or maybe you think robot workers should be subject to income tax. Maybe you think humans need to merge with machines in order to stay relevant. Whatever you do, don’t just stop reading here and call it a day. Make the effort to get informed about artificial intelligence and educate others, because there’s just too much at stake to ignore it.