Mar 29, 2021 11:13:07 AM
How Dynamic Conversational Chatbots Deliver Where Static Flows Fail
We’ve all had the frustrating experience of clicking through a chatbot flow only to land on the wrong answer, sometimes even the same one over and over again.
We might conclude that chatbots are just stupid and really don’t get us. Most button-based chatbots fail to deliver meaningful conversations because of their conversation flow: instead of applying an adaptive, dynamic flow driven by an algorithm, they use static decision trees, which limit their ability to solve user problems.
The Agony of Choice: Click-Chatbots
Chatbots based on decision trees (be they static or dynamic), using buttons to guide the user to the solution, have one big advantage: they’re convenient to use. Clicking a button is faster than typing, and people are already used to it, thanks to picking up their smartphones multiple times per day and interacting with them via taps (or clicks). The chatbot asks the user questions to identify their intent. After answering a few questions, the end user is shown a solution. It is a fast and convenient experience.
The chat flow can either be rule-based or dynamically generated. Rule-based means there are hard-coded rules built into the bot, resulting in a fixed chat flow. This approach offers less flexibility than the dynamic model, which adjusts based on a combination of training data and mathematical calculations. Let’s take a deeper look at both conversation flows.
Conversation Flows with Static Decision Trees (SDT)
When the chatbot interface relies on multiple-choice questions to interact with the user, one way to implement them is with a rule-based, static decision tree. This means the bot asks a series of predetermined questions, fixed to the conversation branch. Unfortunately, this approach often makes the interface long and tedious: because the chat flow is fixed, the user has to go through every option, even the ones that do not apply to them. (A classic example, while not text-based, are the decision trees used on service hotlines: “Press 1 for X; press 2 for Y”.) Buttons can be a good guiding system when users do not know what they want, as they reduce cognitive load, but they can also become restrictive quite quickly, leaving users frustrated.
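To make the limitation concrete, here is a minimal sketch in Python of such a static decision tree bot. The questions, branches, and solutions are invented for illustration, not taken from any real product; the point is that everything is hard-coded, so changing the flow means editing the tree by hand.

```python
# A hypothetical static decision tree: every question, branch, and solution
# is fixed up front, so the flow can never adapt to the individual user.
STATIC_TREE = {
    "question": "What do you need help with?",
    "options": {
        "Billing": {
            "question": "Is it about an invoice or a refund?",
            "options": {
                "Invoice": {"solution": "You can download invoices from your account page."},
                "Refund": {"solution": "Refunds are processed within 5 business days."},
            },
        },
        "Shipping": {"solution": "Track your order via the link in your confirmation email."},
    },
}

def run_static_bot(tree, clicks):
    """Walk the fixed tree using a scripted list of button clicks."""
    node = tree
    while "solution" not in node:
        print(node["question"])
        choice = clicks.pop(0)           # the user's button click
        node = node["options"][choice]   # follow the hard-coded branch
    return node["solution"]

print(run_static_bot(STATIC_TREE, ["Billing", "Refund"]))
```

Every user entering the “Billing” branch is asked the same fixed follow-ups in the same order, regardless of what the bot might already know about them.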
A clear advantage of this approach on the company side is that it is super easy and fast to build bots with it. It’s all about mapping out the chatbot flow based on the most frequently asked questions and connecting them to the right solutions. And there’s no limit to what kind of chatbot this could be: from a bot answering questions about loan financing to a personal one that helps you decide which dress to wear out tonight.
However, the bigger the bot’s solution space, the more complex it gets, until it becomes nearly impossible to maintain. There are limits to what the human brain can digest, and the complexity of a static decision tree chatbot can be overwhelming. This means the bot can only be as good as the person who builds it: its quality is directly proportional to the mental capacity of the bot trainer. And when this dedicated person leaves the company, the “knowledge” of the bot leaves, too. Overall, chatbots based on static decision trees are not a sustainable solution for companies and not a satisfying one for their customers.
Conversation Flows with Dynamic Decision Trees (DDT)
The other possibility for button-based chatbots is dynamic decision trees. These also work with predefined questions and solutions, but here an algorithm dynamically decides which question to ask, and when to present a solution, given the system’s current knowledge.
How the dynamic conversation flow works
Deploying a chatbot with dynamic decision trees provides an airtight logical framework that holds context and knows which questions to ask the user in order to get to the right solution. This approach still offers fast and convenient solutions to the user, but guides them more precisely with every click towards the most likely answer. After each click, the algorithm makes a new, dynamic decision to present the next best question.
In addition to that, there are context questions that describe users, e.g. based on their customer profile and other information like their order history. These questions are automatically answered by the chatbot in the background and taken into account in the conversation. This helps make the chatbot flow more relevant and personal, so user requests are answered only with solutions that relate to them. If the bot still can’t handle the question, a seamless handover, e.g. via live chat, should be possible at every stage of the conversation.
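The two ideas above, picking the most informative next question and pre-answering context questions from the customer profile, can be sketched in Python. This is a hedged illustration: the solutions, attributes, and the simple greedy “best split” scoring rule are all invented assumptions, not Solvemate’s actual Contextual Conversation Engine.

```python
# Hypothetical dynamic question selection: each candidate solution is tagged
# with the attribute values it applies to, and the bot asks whichever open
# question narrows the candidate set down fastest.
from collections import Counter

SOLUTIONS = {
    "reset_password": {"topic": "account", "goal": "access", "logged_in": "no"},
    "change_email":   {"topic": "account", "goal": "update", "logged_in": "yes"},
    "track_order":    {"topic": "order",   "goal": "status", "logged_in": "yes"},
    "cancel_order":   {"topic": "order",   "goal": "cancel", "logged_in": "yes"},
}

QUESTIONS = {"topic": "What is your request about?",
             "goal": "What would you like to do?",
             "logged_in": "Are you logged in?"}

def best_question(candidates, answered):
    """Pick the unanswered attribute whose answers split the candidates most evenly."""
    def split_score(attr):
        counts = Counter(SOLUTIONS[s][attr] for s in candidates)
        return max(counts.values())  # smaller largest bucket = better split
    open_attrs = [a for a in QUESTIONS if a not in answered]
    return min(open_attrs, key=split_score) if open_attrs else None

def converse(context, clicks):
    """Context answers (e.g. from the customer profile) are filled in up front."""
    answered = dict(context)
    candidates = [s for s in SOLUTIONS
                  if all(SOLUTIONS[s][a] == v for a, v in answered.items())]
    while len(candidates) > 1:
        attr = best_question(candidates, answered)
        if attr is None:      # nothing left to ask that could narrow it down
            break
        print(QUESTIONS[attr])
        answered[attr] = clicks.pop(0)
        candidates = [s for s in candidates if SOLUTIONS[s][attr] == answered[attr]]
    # None here is where a seamless handover to live chat would kick in.
    return candidates[0] if len(candidates) == 1 else None
```

With the context `{"logged_in": "yes"}`, the bot never asks the login question, and because the `goal` attribute splits the remaining candidates best, a single click is enough to reach a solution, whereas a static tree would march every user through each fixed level.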
The freedom of not having a fixed flow benefits not only the users, but also the company and the team deploying the chatbot. The initial bot content is easy to set up, as it’s a best practice to start with the top 20 requests. Further training occurs when a new solution is added to the dataset and the algorithm predicts which questions to ask for it. Based on solution clustering, past conversations are grouped to minimize the training work.
Do you want to learn more about chatbots?
Button-based chatbots are not the only species out there. Another relative is the NLP chatbot, which has both pros and cons compared to multiple-choice chatbots. If you want to go even deeper, read how chatbots are getting smarter with NLP and how the Solvemate Contextual Conversation Engine™️ combines DDT and NLP to enable meaningful conversations.
Karen takes care of Solvemate's content universe as Marketing Communications Manager. When not writing about chatbots, you will find her watching Danish tv series (Dear Netflix, please talk to DR and add some new ones!), doing (aerial) yoga or trying out every recipe from Yotam Ottolenghi.