Source: Deep Learning on Medium
John Etchemendy on Human-centered AI (HAI).
Stanford Provost Emeritus and co-Director of the Institute for Human-Centered AI (HAI), John Etchemendy, shared his thoughts about the future of AI. These are some notes from the talk.
Current AI is characterized by deep-learning, neural-network architectures. It is actually narrower than what many people refer to as AI: it is “a way to program computers to do things that could not be done with traditional techniques” (e.g., algorithmic approaches).
Deep learning as a technology is amazing, but it only works in specialized domains and requires large amounts of data. The results are “brittle” and break down: a system might perform like a human for the first 1,000 cases, but then do something bizarre that no human would.
Human intelligence, by contrast, is even more amazing. It can learn to do things with several orders of magnitude less data, and it is not brittle.
HAI attempts to:
- Leverage neurosciences and cognitive sciences to push AI forward.
- Look at the impact of AI. Need to understand what the impact is going to be on people, the workforce, society, etc. Look at how AI can improve, enrich, extend what humans can do.
AI will cause disruptions. Generally speaking, there are more jobs after a technological revolution, but there will be winners and losers. Some jobs will be replaced by machines, effectively giving us infinite (labor) productivity for those tasks. Overall, jobs will become better. For example, long-haul truck driving could be almost completely eliminated in the near future, but short-haul driving may actually increase (e.g., deliveries, the last mile). Drivers can enjoy a better quality of life. So, jobs will be enhanced.
Computer-assisted education has been a disappointment. Education works best in small groups or 1:1 settings, e.g., the tutoring system at Oxford. Can human-centered AI help improve 1:1 teaching? Can it observe facial expressions to assess whether students are paying attention, and whether they are understanding?
AI can enhance human caregivers. For example, it can observe and monitor patients and elders without having a person present at all times.
Fei-Fei Li, the co-director, founded AI4ALL to increase participation from more diverse groups in AI.
HAI needs to include all the major disciplines, such as Law, Medicine, and Sociology, and to educate legislators and other influencers so that discussions around policy and practice can be informed.
Companies focused on AI cannot bring together all the disciplines that can assist AI’s advancement, such as cognitive psychology, neuroscience, and so on; the discipline of AI is now as broad as all the disciplines in a university. Companies are interested in the societal impact of AI and in threats to its acceptance by the population at large.
The view that “facial recognition should be banned” is too narrow. The better way to look at a technology is to assess its value at the application level: there can be good uses and bad ones. In any case, the technology itself cannot be banned.
Most people are already using AI, consciously or not, for example in shopping recommendations. But will all the labor-productivity gains be owned by those who provided the capital? Will ownership be highly concentrated? How can we share that productivity? This is a societal problem, not a technological one, and it should be addressed through sensible regulation, not through redistribution or heavy-handed regulation.
There is no timeline for Artificial General Intelligence (AGI). Companies trying to build AGI will fail. Could we achieve AGI? Yes, but it might be 50 years away; we have been “20 years away” from AGI for the past 60 years!
Most of AI today helps machines do recognition tasks (e.g., object recognition).