Conversational AI for Developers: The Four Essential Layers in your Stack

Original article can be found here (source): Artificial Intelligence on Medium

Building an AI assistant is hard. Building one that not only handles questions and executes tasks but also engages in flexible back-and-forth dialogue is tremendously difficult. It requires machine learning, sound engineering practices, powerful tools, and data in the form of valuable user conversations.

You might think that building great AI assistants means building everything from scratch. But you don’t need to start from the most basic building blocks to achieve something that’s performant, enterprise-ready, and flexible enough to fit your needs. An open source framework like Rasa is the middle ground between building everything yourself and using a SaaS platform that you can’t customize to your use case and training data.

Rasa has been shipping open source software that has empowered thousands of individual developers, startups, and Fortune 500 companies to create AI assistants. Rasa has also released applied research, like the TED dialogue policy and the DIET NLU architecture, in developer-friendly workflows.

Rasa is not a be-all, end-all conversational AI platform. It’s a customizable infrastructure layer that provides conversational AI building blocks in a plug-and-play architecture. In this post, we’ll talk about the components needed to build AI assistants and how Rasa fits into your stack.

How Rasa fits into your stack

Every conversational AI application is made up of several distinct layers:

  • The computation layer
  • The conversational AI infrastructure layer
  • The tools/services layer
  • The application layer
Conversational AI components

Let’s say you’re making a pizza. If you’re well-versed in the nuances of pizza making, you’d know that you need raw ingredients for the dough, cheese, sauce, and toppings.

You might have three options:

  • you could make all of the ingredients yourself
  • you could customize pre-prepared ingredients and assemble the components into a pizza
  • or you could buy a pizza with everything pre-built and pre-chosen that you can’t customize
Anatomy of a pizza

The Computation Layer

The computation layer is the foundational layer on top of which other layers sit. Machine learning-friendly programming languages like Python and Julia, and machine learning frameworks like TensorFlow, PyTorch, and spaCy make up the computation layer.

These open source frameworks provide high-level APIs that make it easier to build and experiment with models; they come with pre-packaged algorithms that solve common ML problems such as neural machine translation, named entity recognition, and so forth. Some of them even help with deploying models to production.

Think of programming languages as the raw ingredients, like flour, oil, salt and water, that make up pizza dough. ML frameworks can be thought of as prepackaged dough that you might find at a supermarket. It might be easier to use an open source software library or framework to start building your assistant than to build everything from scratch.

The Computation layer

The Conversational AI Infrastructure Layer

The conversational AI infrastructure layer sits atop the computation layer. It provides a cohesive framework that helps developers build AI assistants. Rasa Open Source is a conversational AI framework for building AI assistants. It includes natural language understanding capabilities that identify intents and entities, machine learning powered dialogue management, connectors that help integrate with popular messaging services, and custom actions that can be invoked to integrate with external systems.
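To illustrate the custom-action pattern described above, here is a stdlib-only sketch. The real implementation lives in the `rasa_sdk` package; the `CollectingDispatcher` stand-in, the action name, and the order-lookup logic here are illustrative assumptions, not Rasa's actual code:

```python
# Stdlib-only sketch of the custom-action pattern used by a Rasa action
# server (the real base classes live in the rasa_sdk package).
class CollectingDispatcher:
    """Collects messages the assistant should send back to the user."""
    def __init__(self):
        self.messages = []

    def utter_message(self, text):
        self.messages.append(text)


class ActionCheckOrderStatus:
    """A custom action: look up data in an external system, then reply."""
    def name(self):
        # Identifier referenced from the assistant's domain file.
        return "action_check_order_status"

    def run(self, dispatcher, tracker, domain):
        # 'tracker' holds conversation state; a plain dict stands in here.
        order_id = tracker.get("order_id", "unknown")
        # In a real action this would call an external order API.
        dispatcher.utter_message(f"Order {order_id} is on its way!")
        return []  # events to apply to the conversation; none here


dispatcher = CollectingDispatcher()
action = ActionCheckOrderStatus()
action.run(dispatcher, tracker={"order_id": "1234"}, domain={})
print(dispatcher.messages[0])  # Order 1234 is on its way!
```

The key design idea is the same as in the real SDK: the framework owns the conversation loop, and your action only receives state and emits responses, keeping business-system integration cleanly separated from dialogue handling.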

Think of Rasa Open Source as the sauce for your pizza. Making it from scratch is challenging and requires specialized knowledge; but using a pre-packaged version, you can customize and transform the sauce to your liking, and focus your efforts on making a great pizza, not a sauce.

Rasa Open Source is one abstraction layer above a machine learning framework like TensorFlow. Just as you wouldn’t write your own programming language or your own version of TensorFlow, you don’t have to build your own conversational AI infrastructure components.

The Conversational AI Infrastructure layer

Rasa Open Source abstracts away the complexities involved in building AI assistants. But it is entirely extensible and customizable. You can build any type of assistant using Rasa Open Source. You can plug your own tools into Rasa or build additional layers on top of it.

Since AI assistants can be use-case and industry specific, the ability to cherry-pick pipeline, policy, and configuration options based on the problem and training data is valuable. For example, you can use a pretrained language model like BERT or ConveRT, or plug in your own custom model.
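As a sketch of what this cherry-picking looks like in practice, a Rasa `config.yml` lists NLU pipeline components and dialogue policies by name. The component names below follow Rasa's documented components (DIET, TED, a BERT featurizer); the exact options and defaults depend on your Rasa version:

```yaml
# config.yml — illustrative sketch, not a recommended production setup
language: en
pipeline:
  - name: WhitespaceTokenizer
  - name: LanguageModelFeaturizer
    model_name: bert        # swap in "convert" or a custom model
  - name: DIETClassifier    # joint intent classification + entity extraction
    epochs: 100
policies:
  - name: TEDPolicy         # ML-based next-action prediction
    max_history: 5
  - name: RulePolicy        # deterministic rules for fixed behavior
```

Swapping a component is a one-line change here, which is what makes the infrastructure layer adaptable to different domains and training data.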

Rasa Open Source also uses machine learning policies and dialogue management to handle nonlinear conversations and messy human behavior. Users often interject with off-topic messages, loop back to earlier topics, or digress; they rarely speak in a neat, sequential manner. An assistant whose purpose is to automate conversations or handle a meaningful share of customer service requests must be able to handle these kinds of conversations. State machines and out-of-the-box SaaS chatbot platforms are poorly suited to these use cases because they cannot scale beyond rules and if/else statements.

To clarify, rules and business logic are still needed in most AI assistants. And you might be able to get away with a state machine initially, before you’ve tested your assistant with real users.

A typical state machine flow

But when you start testing with real users and adding more capabilities, a state machine very quickly becomes incredibly messy, unmaintainable, and difficult to scale.

State machine spaghetti code
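A toy example makes the scaling problem concrete. In this hypothetical pizza-ordering state machine, every real-world user behavior (asking the price mid-flow, changing their mind) requires hand-writing another transition, and any pair you forget dead-ends the conversation:

```python
# Toy state machine for a pizza-ordering flow (illustrative only).
# Each new user behavior forces another hand-written transition,
# which is why this approach stops scaling.
TRANSITIONS = {
    ("start", "order_pizza"): "choose_size",
    ("choose_size", "size_given"): "choose_toppings",
    ("choose_toppings", "toppings_given"): "confirm",
    # Patches for real user behavior pile up quickly:
    ("choose_size", "ask_price"): "quote_price",
    ("quote_price", "size_given"): "choose_toppings",
    ("choose_toppings", "change_size"): "choose_size",
    # ...every loop-back and digression needs its own explicit edge.
}

def next_state(state, intent):
    # Any (state, intent) pair without an explicit edge is a dead end.
    return TRANSITIONS.get((state, intent), "fallback")

print(next_state("choose_size", "size_given"))  # choose_toppings
print(next_state("confirm", "change_size"))     # fallback: no edge written
```

A learned dialogue policy sidesteps this by generalizing from example conversations instead of enumerating every edge by hand.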

Customizable frameworks that use machine learning to, for example, predict the next best action and generalize based on conversation history, allow you to create assistants that support flexible and natural multi-turn conversations.

The Tools/Services Layer

The tools or services layer sits on top of the infrastructure layer. With Rasa Open Source, you can build an MVP assistant that can hold conversations, understand context, and execute certain tasks. The next step is to test and improve your assistant, and tools help you do that. Rasa X provides tools to collect, visualize, and review user conversations. It allows you to test with real users, set up CI/CD pipelines, and make continuous improvements. Think of Rasa X or other tooling as the cheese on your pizza: it improves your assistant, just as cheese makes your pizza better!

The Tools/Services layer

The Application Layer

The application layer is where your AI assistant lives. It can live alongside other systems and web applications. For example, an enterprise’s payroll, HR, and IT systems live here. Depending on your AI assistant’s use case, it might interact with one or more of these systems. Applications at this layer also have user interfaces that your users will actually see and interact with. You can deploy your assistant to the cloud or host it on-premise based on your security and privacy considerations.
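To show how an application-layer UI talks to the assistant, here is a stdlib sketch of a client for Rasa's documented REST channel. The endpoint path and JSON shape follow Rasa's REST connector; the localhost URL and sender id are assumptions for a local deployment:

```python
import json
from urllib import request

# Sketch: an application-layer client calling a Rasa assistant over the
# REST channel. Host and port are assumptions for a local deployment.
RASA_URL = "http://localhost:5005/webhooks/rest/webhook"

def build_message(sender_id, text):
    # The REST channel expects a sender id and the user's message text.
    return json.dumps({"sender": sender_id, "message": text}).encode("utf-8")

def send_message(sender_id, text):
    req = request.Request(
        RASA_URL,
        data=build_message(sender_id, text),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        # Rasa replies with a list of bot messages for this sender.
        return json.loads(resp.read())

# Inspect the payload without needing a running server:
payload = json.loads(build_message("user-42", "Where is my order?"))
print(payload["sender"])  # user-42
```

Because the channel is plain HTTP with a stable sender id, any front end — a web widget, a mobile app, or an internal HR portal — can integrate the same way.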

Conversational AI components

Conclusion

Building mission critical conversational AI requires machine learning, conversation data, sound engineering, and powerful tools to design, analyze, and improve conversational AI workflows. Rasa makes powerful conversational AI research and tools available to developers who don’t have the resources to build from scratch, and to organizations that want to focus on solving interesting problems without reinventing the wheel. The ability to customize an open source framework, as opposed to building from scratch or working with a black-box SaaS chatbot platform, has made conversational AI development accessible to not just machine learning researchers, but thousands of developers all over the world.