Human Compatible: A timely warning on the future of AI – The Next Web


The late Stephen Hawking called artificial intelligence the biggest threat to humanity. But Hawking, though a revered physicist, was not a computer scientist. Elon Musk compared AI adoption to “summoning the devil.” But Elon is, well, Elon. And there are dozens of movies that depict a future in which robots and artificial intelligence go berserk. But they are just a reminder of how bad humans are at predicting the future.

It’s very easy to dismiss warnings of the robot apocalypse. After all, virtually everyone in the field’s who’s who agrees that we’re at least half a century away from achieving artificial general intelligence, the key milestone on the way to an AI that could dominate humans. As for the AI we have today, it can best be described as an “idiot savant.” Our algorithms can perform remarkably well at narrow tasks but fail miserably when faced with situations that require general problem-solving skills.

But we should reflect on these warnings, if not take them at face value, computer scientist Stuart Russell argues in his latest book Human Compatible: Artificial Intelligence and the Problem of Control.

Russell certainly knows what he’s talking about. He’s a professor of computer science at the University of California, Berkeley, the Vice Chair of the World Economic Forum’s Council on AI and Robotics, and a fellow of the Association for the Advancement of Artificial Intelligence (AAAI). He’s also the co-author of Artificial Intelligence: A Modern Approach, the leading textbook on AI, used in more than 1,400 universities across the world.

Russell’s book is a sobering reminder that now is the time to adjust our course and make sure AI remains under our control, both today and in the future. Because if super-intelligent AI takes us by surprise, it will be too late.

A realistic view of today’s AI