What may once have seemed like a scene from a science-fiction movie is now the norm, not the anomaly: conversational artificial intelligence (AI) assistants are everywhere. AI was once a field limited to labs and specialized environments, but we can now effortlessly talk to our technology, including smartphones, TVs, and even our cars. Siri, which originated at SRI International, has become a household name thanks to its ability to interact seamlessly with humans, and over the past few years, practical applications of machine learning and AI have spread to many facets of human life. Medicine, marketing, manufacturing, banking (‘fintech’), and education are just a few of the sectors being transformed by AI.
Although many uses of AI technology, like the medical-treatment models of insurance companies, are hidden behind the scenes, consumers now interact daily with virtual assistants and AI chatbots.
In our article about Internet trends over the next decade, Manish Kothari, President of SRI International, touched on how popular virtual assistants are enticing people to speak with computers as if they were human. Dialog is a basic human capability for communication and problem solving, and computers are getting in on the act.
At AI World 2019, William Mark, PhD, President, SRI; Karen Myers, Director, SRI AI Center; and Sasha Caskey, CTO & Co-Founder, Kasisto (an SRI Ventures portfolio company), explained the mechanics of conversational AI technology and its implications for society. The full panel discussion, hosted on the SRI YouTube channel, is insightful on its own, but it raises a couple of challenges that are worth exploring in more detail. Let’s look at those below.
When it comes to realistic AI, it’s all about the context
Could conversational AI assistants successfully pass as humans? The vision isn’t new. The 2013 movie *Her* introduced the idea of a human forming an emotionally charged bond with a virtual assistant. While the film is fictional, it raises the question: is it possible for AI engines to get that close to acting in a human-like way? The short answer is that it’s all about the context. It is technically possible for a virtual assistant to converse like a human, but if you ask a question that is outside the chatbot’s frame of reference, you’re likely to end up with more frustration than answers. Kasisto, for example, specializes in helping enterprise financial institutions roll out chatbots that let customers complete routine banking transactions using voice or text messages. As Karen Myers of SRI mentioned on the panel, when a chatbot draws on the history of interactions with a particular customer, “context becomes very important.”
As voice assistant technology improves, businesses are left to wonder just how human-like their chatbots should be. The consensus of SRI panelists at AI World is that when chatbots are used in a commercial setting, businesses have an ethical duty to distinguish them clearly from human employees. William Mark observed that while it is technically possible to make conversational AIs indistinguishable from humans, “there’s something important about the connection, emotion and empathy that human beings have with each other, even over the phone…” that should be preserved.
AI isn’t going to replace entire customer service teams anytime soon, but startups such as OTO, an SRI Ventures portfolio company, are enabling businesses to elevate the human qualities of customer conversations with AI-based speech analysis. As long as chatbots and virtual assistants identify themselves as such, advanced innovation in human-to-machine dialog can be helpful. As Sasha Caskey explained, “there’s a certain humanity we want to bring back.”
Looking beyond chatbots and conversational AI in B2B settings, it’s worth considering whether lifelike chatbots are actually useful or even wise for consumer applications. In 2017, Wired reported on Amazon introducing “speechcons”: interjections like *argh*, *cheerio*, *d’oh*, and even *bazinga*. While they might seem like a mere novelty, the article explains how Amazon could potentially use speechcons to increase engagement and drive more online purchases. Bazinga!
Where do we go from here with conversational AI?
Despite numerous advances in AI technology overall, including natural language processing (NLP), there’s still plenty of room for improvement in conversational AI technology.
The relative success of a chatbot should be measured on a case-by-case basis, as every business has unique goals and objectives. VentureBeat outlines a few metrics, such as revenue growth, satisfaction rate (measured using the Net Promoter Score), and confusion triggers.
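A satisfaction metric like the Net Promoter Score is easy to compute from post-chat survey ratings. Here is a minimal sketch; the 0–10 scale and the promoter/detractor thresholds follow the standard NPS definition, while the function name is our own:

```python
def net_promoter_score(ratings):
    """Compute NPS from 0-10 survey ratings.

    Promoters score 9-10, detractors 0-6; NPS is the percentage
    of promoters minus the percentage of detractors.
    """
    if not ratings:
        raise ValueError("ratings must be non-empty")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Example: ratings gathered from a post-chat survey
print(net_promoter_score([10, 9, 8, 7, 6, 3]))  # → 0.0
```

In this example the two promoters and two detractors cancel out, yielding a score of zero; NPS ranges from -100 (all detractors) to +100 (all promoters).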
App developers and technical professionals can measure the success of their AI models using a variety of methods, most of which are covered in the comprehensive paper “Classifiers and their Metrics Quantified.” Sasha Caskey remarked, “As conversational agents become more prevalent, they become more specialized” even within a single bank. Karen Myers stated, “We’re still a way away from having our intelligent agents act as delegates for us — off doing more open-ended tasks.”
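For classifier-style evaluation of a conversational model, per-intent precision, recall, and F1 can be computed directly from labeled conversations. A minimal sketch, using hypothetical banking intent labels:

```python
def precision_recall_f1(true_labels, predicted_labels, positive):
    """Precision, recall, and F1 for one class of a classifier's output."""
    pairs = list(zip(true_labels, predicted_labels))
    tp = sum(1 for t, p in pairs if t == positive and p == positive)
    fp = sum(1 for t, p in pairs if t != positive and p == positive)
    fn = sum(1 for t, p in pairs if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical intents assigned by an annotator vs. the model
true = ["balance", "balance", "transfer", "transfer"]
pred = ["balance", "transfer", "balance", "transfer"]
print(precision_recall_f1(true, pred, "balance"))  # → (0.5, 0.5, 0.5)
```

Computing these per intent (rather than one overall accuracy figure) shows which specialized capabilities of an agent are underperforming.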
Another current limitation of conversational AI is a lack of self-awareness: AI engines cannot recognize their own technical weaknesses or narrow focus and communicate those to humans. Chatbots do not understand when a conversational style is ineffective or is not accomplishing the human’s intended goal. By contrast, when a human teacher is trying to communicate a lesson, they can recognize when a student doesn’t understand something and adjust the lesson accordingly. Today’s chatbots and virtual assistants lack that capability and cannot deal with ambiguity. The consensus of industry experts is that AI is a long way from becoming self-aware.
In the meantime, humans can take advantage of conversational AI engines by using them to answer basic questions or perform simple tasks such as scheduling meetings between colleagues. AI can also augment human capability on more complicated tasks, such as routing a caller to the right department of a bank based on their initial interactions via phone or text, and then having a human take over from there. Challenges with conversational AI persist, but one thing is for sure: we are not far off from being able to ask contextual questions of our virtual assistants and receive “thoughtful” answers.