Book Review, ‘Human Compatible’: A Book About Artificial Intelligence (AI) That Asks Some Interesting Questions – Forbes


I was sent a copy of “Human Compatible: Artificial Intelligence and the Problem of Control”, by Stuart Russell. It is a book that purports to address the issue of the threat to society of artificial intelligence (AI), including some of the apocalyptic scenarios put forward by movies and by many people who don’t know much about AI. While I found that the book asks many interesting questions, I think it has problems answering them.

The first issue I have starts with the very first chapter, which includes a few key phrases such as “the machines just weren’t smart enough” and “beginning in 2011, deep learning techniques began to produce dramatic advances.” The problem is that the book doesn’t explain why we’ve seen those advances. I’ve mentioned it elsewhere, but the advances don’t primarily have anything to do with AI theory. We had many of the theories that would lead to those advances back in the 1980s. What was missing was hardware. The growth of cloud computing drove scale-out hardware: multiple processors, and even multiple computers, working together to solve problems that even the most powerful individual machines couldn’t manage. AI and, in particular, deep learning (DL) then took advantage of scale-out to perform tasks that couldn’t even be considered back in the 80s.

On the good side, Russell does a great job, in the early chapters, of describing both the power and the limitations of today’s AI systems. One thing he points out about the systems that have beaten the world’s best players of chess, Go, and other games is that they are still only playing two-player games. A lot of that can be addressed by brute-force lookahead. Better AI will be needed to judge the interplay among more players.
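That brute-force lookahead can be sketched in a few lines. Below is a minimal, hypothetical minimax search for a toy two-player game (a variant of Nim); the game, rules, and function names are my own illustration, not anything from the book.

```python
# A hypothetical sketch of brute-force lookahead (minimax) for a toy
# two-player game: players alternately take 1-3 stones, and whoever
# takes the last stone wins. Exhaustive search is feasible only
# because the game tree here is tiny.

def minimax(stones, maximizing):
    """Return +1 if the maximizing player can force a win, else -1."""
    if stones == 0:
        # The previous mover took the last stone and won.
        return -1 if maximizing else 1
    outcomes = [minimax(stones - take, not maximizing)
                for take in (1, 2, 3) if take <= stones]
    return max(outcomes) if maximizing else min(outcomes)

print(minimax(5, True))   # 1: the first player can force a win
print(minimax(4, True))   # -1: the first player always loses
```

With three or more players there is no single adversary to minimize against, which is exactly the limitation the paragraph above points to.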

One of the best chapters is chapter four. Russell looks at a few of the basic misuses of AI, including government surveillance and control, automated weapons, and the loss of jobs. The last is where I think the biggest risk of societal upheaval exists, and the book doesn’t give it enough attention.

An example of that is in chapter six, where Russell writes that “collaborative human-AI teams are indeed a desirable goal.” The problem with that is that collaboration isn’t so much a goal as an intermediate step. Given the limits of AI capabilities today, collaboration is required. The goal, from a business leader’s perspective, is to replace costly humans with machines in order to increase profit. Today, we need that collaboration. It’s an interesting and important topic, but it’s only a step in the journey.

That journey is another excellent part of the book. The author points to the many people who say a super-intelligent AI is still far down the road. He does address that by pointing out two things. First, nobody knows how close or how far it really is. Second, major changes should be thought about in advance. That seems obvious, but so many people ignore challenges until there is no option but to address them.

Much of the second half of the book is taken up with what was mentioned in the previous paragraph: the super-intelligent AI. I can quibble with what that means – and how we’d know – but an important question to ask is how humans can control those machines, or at least prevent them from controlling us. This is where a lot of interesting questions are asked, and it’s nice to see them; but it’s also where the answers aren’t really provided.

I also had an issue with how some of the topics are described. Chapter 9 focuses on people and AI through the lens of utilitarian theory. Here is where I thought the book didn’t go into the details it should have covered. For instance, while Russell earlier discussed the limitation of AI systems only playing two-person games, all of the utilitarian examples in the chapter are focused on two-player games. The world is far too complex, and assuming that a two-player solution extends to a large society seems simplistic.

Along with that simplicity, the book also misses how complex our societies are. Russell assumes a superintelligence would be trained with the mores and ethics of an open, democratic society. Far more than half of the world’s nations are not democracies, and most nations are working on AI systems. The definition of what is utilitarian varies, and I’m quite sure that the definitions in China and Russia won’t match those of Canada or France. It is easy to see how a super-intelligent system could harm people for a “good” that people who believe in freedom might not think of as appropriate.

In addition, what a super-intelligent system thinks of as good, regardless of where it has been trained, might never match what humans expect. For instance, chapter 8 describes the “off-switch” game, in which the system would learn human preferences and follow them. The first half of that sentence makes sense, but why would we expect a system to follow those preferences?
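The core of the off-switch argument is a simple expected-utility comparison: a system uncertain about the human payoff U of its proposed action does better by deferring to a human who will allow the action only when U is positive. The numerical sketch below is my own illustration, with an arbitrarily assumed distribution for U; it is not taken from the book.

```python
# A hedged numerical sketch of the off-switch argument: compare acting
# unilaterally (payoff E[U]) with deferring to a human who permits the
# action only when U > 0 (payoff E[max(U, 0)]). The standard normal
# distribution for U is an assumption made purely for illustration.
import random

random.seed(0)
samples = [random.gauss(0.0, 1.0) for _ in range(100_000)]

act_anyway = sum(samples) / len(samples)                   # ~ E[U]
defer = sum(max(u, 0.0) for u in samples) / len(samples)   # ~ E[max(U, 0)]

# The human filters out the negative outcomes, so deferring dominates:
# E[max(U, 0)] >= E[U]. That inequality is the system's incentive to
# leave the off-switch enabled -- but only while it remains uncertain.
print(defer >= act_anyway)  # True
```

Note that the incentive evaporates once the system believes it already knows U exactly, which is one way of framing the worry in the paragraph above.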

One reason I’ve regularly talked about transparency in AI systems is that black boxes are dangerous. It’s going to be hard enough to understand what a super-intelligent system is doing (even putting aside the theoretical impossibility of understanding something more intelligent than we are) without strong oversight of the layers and nodes driving the algorithms, not to mention the ability to explain the algorithms. We need to work on transparency in DL algorithms now.

I think a lot of the differences between Stuart Russell’s view and mine have to do with background. We almost overlapped at Stanford. However, while his career has been purely academic, I was there for a terminal MS in order to solve business problems. I view AI as a valuable tool that is still limited. While neither of us believes the apocalyptic spouting of many, his book still seems to view the problem more on a theoretical plane than in terms of how it will really impact politics, economics, and the other parts of society that directly affect the regular person.

“Human Compatible” is a much more interesting book than the few others I’ve read in the last few years, and I think it’s good for people curious about artificial intelligence. This is a debate, an important debate, and the book will provide information to help people to begin to think about some of the deeper issues of AI and society.


David A. Teich is interested in artificial intelligence (AI), machine learning (ML), robotics, and other advanced technologies, focused on how they help businesses improve performance. He’s an analyst and consultant in those areas as well as in high tech, B2B, and marketing. His previous work runs the gamut in software, including operations, development, field consulting, sales engineering, and product marketing. He has worked in startups, mid-sized companies, and global organizations.
