

Mar 19, 2020 10:21:21 AM

Want to design a world-class customer service chatbot? Not without UX testing!


Chatbots have been all the rage for a few years now, with hundreds of companies - from startups to big enterprises - having a go at developing their own chatbot product.


As it’s a fairly new technology, it’s still “finding itself” - it can definitely be hard to get it right.


So how do you make a chatbot with great UX that people want to use? You test it with the right methods, then you iterate your product, and then you test it some more.


Virtually all successful product development comes down to UX testing; yet, interestingly enough, only 55% of companies currently conduct any UX testing.


At Solvemate, we build customer service chatbots that deliver a superior automated service experience. Hence, the need for a convenient, conversational chatbot interface is integral to our mission. To be successful, we have to fundamentally understand our end-users - their pain points and their experience with our chatbot.



We have to keep asking “why” (at least five times) before we can come up with ideal solutions or best practices.


We want to deliver new features that make sense for our customers and for the end-users, and we want to optimise our chatbot so that it becomes better at problem solving and easier to use.


In order to do this, we constantly collect valuable feedback and generate hypotheses and possible solutions based on that feedback. But how do we know that the concept we came up with is the right way to go? How can we be sure that this is what the users want?


When making decisions - as product managers and UX designers - we cannot simply rely on our own gut feeling, or on our own experiences. We all have biases, blind spots and expectations. Becoming a great, user-centric UX designer comes with a large dose of humility - your intuitive choices and assumptions on user behaviour and preferences are very likely wildly incorrect.


To prove - or disprove - hypotheses about our chatbot, we have to gather end-user data. This is where meticulous UX testing comes into play.



What is UX (testing)?


UX stands for "User Experience", so the entire discipline revolves around the user. Running a continuous quantitative analysis is the baseline. As part of our design routine, we already observe a variety of key metrics to keep track of “how we’re doing”.


These metrics provide invaluable insights into the real-life experiences with our chatbot and any problems the end-users might encounter. In a pinch, we can even implement quick fixes based on these findings. For example, we’ve been able to identify unreliable chatbot behaviour when users have been on slow internet connections (and as a result, unsurprisingly, they’ve dropped off).


All of this quantitative baseline knowledge is useful, but to develop a world-class conversational chatbot interface, simply gathering statistics is not enough. You need to match the methodologies and analysis with the questions you’re asking - and the real problems of your users.



How to approach UX testing for chatbots


Conversations and dialogues are emotional - they are part of what makes us human. So it only makes sense for us to strive for methodologies that capture emotional feedback, collecting thoughts from users while they are using our product.


At Solvemate, we’ve conducted both remote and in-person testing, depending on what we’ve wanted to build for our chatbot and web app, but in-person testing provides much more depth. Staying faithful to the masters, these methodologies border on the scientific, and they involve “a researcher (called a ‘facilitator’ or a ‘moderator’) who asks a participant to perform tasks, usually using one or more specific user interfaces. While the participant completes each task, the researcher observes the participant’s behavior and listens for feedback.”



Two cases of UX testing for customer service chatbots


The first case was about solution verification. We started to develop a new feature called “True Resolutions” in 2019, and we ran our first in-house, on-site UX test last August with some of our existing customers.


The basic premise of the feature is this: we have to be able to reliably verify that the end-user has indeed gotten their problem solved by the bot, and if that’s not the case, there should be an intuitive, easy way for them to flag their request to the service team (via a phone call, live chat or email). These insights are also crucial for the companies using Solvemate; they need to know where and when the bot can help the most, and where it can be improved.


We had planned a four-hour in-person test with a clickable prototype and a predefined user journey - we had very clear key functionalities we wanted to focus on. During the test, we had an ongoing Q&A with the test subjects and collected as much on-the-spot feedback as possible. The main goal of the test was to find out if the new verification metric would be easy and intuitive to understand.


The second case was about an intuitive chatflow. In January 2020, we wanted to test the new chatflow, and the goal was to see how users felt while navigating through the actual user path: clicking options, reading solutions, trying to get their problem solved. We’re so used to chatting and conversational interfaces that we often don’t think about the little intricacies that go into the design: how long the message delay is, how the messages appear and behave on the screen, how the chat elements are colour coded, and so on.


Chatbots and their chatflows are so intertwined that it’s almost impossible to say where the chatbot ends and the chatflow begins. Whenever a user interacts with the chatbot, they immediately experience the chatflow. The look and feel of the chatflow is virtually the first touchpoint of the bot, making it almost synonymous with the chatbot product itself. And automated conversational UIs are quite unforgiving of small details: because they mimic a real human dialogue, if even a single element feels “off” (for example, the delay in the chatbot’s response), users will feel frustrated and might even abandon the chat.


Again, we had some customers on-site for a one hour workshop, with each customer being paired with a dedicated facilitator. During the test, the facilitator was able to observe and take notes while the candidate was focusing on the scenario and given task.


You cannot build a world-class, user-centric product - a customer service chatbot, in our case - without conducting rigorous UX testing and leaving your ego at the door. It’s not about delivering a large number of features, but about ensuring a quality end-product, even if that takes more time.


So, what came out of our UX testing and user research?



1. There should always be a simple way to reach the service team


Even though chatbots can do a lot, they cannot, and should not, do everything. There will always be cases where the service team is needed, because the complexity or nature of the problem calls for case-by-case judgment. Such problems are numerous: potential fraud, custom orders or services, or a situation where, for example, an order has arrived too late for a birthday or another event.


The chat flow needs to be designed in such a manner that it’s both easy and intuitive for the user to reliably indicate if the chatbot was not able to help them - and what they can do next.


Users also found it helpful to be able to distinguish between higher and lower priority support cases in order to match cases to the appropriate handover channels; is a phone handover necessary for every type of problem, or should it be available only for urgent cases? Would email or live-chat suffice for support cases that are not time-sensitive?
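To make the idea concrete, priority-based handover routing can be sketched like this (the type names and the channel mapping are our own illustration, not Solvemate’s actual configuration):

```typescript
// Hypothetical sketch: route handover requests to channels by case priority.
type Priority = "urgent" | "normal";
type Channel = "phone" | "live-chat" | "email";

function handoverChannels(priority: Priority): Channel[] {
  // Reserve the phone line for urgent cases; asynchronous channels
  // (live chat, email) cover everything that is not time-sensitive.
  return priority === "urgent"
    ? ["phone", "live-chat"]
    : ["live-chat", "email"];
}
```

The point is less the mapping itself than that the decision is explicit and easy for the service team to adjust per problem type.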



2. Making a choice within the chatbot should be easy


Clicking buttons is fundamental to the feel and overall behaviour of our bot’s chatflow, as it involves hardly any typing.


However, during the test we received feedback that displaying the choices as a list of buttons is sometimes overwhelming, as users are forced to scroll further down to see all the given options.


As a result of the UX testing, we decided to change the button styling, rearrange them within the widget, and display them closer to each other (instead of listing each option below the previous one). With this solution we covered this pain point and improved the UX.




3. Colour contrasts and transition within UI & conversation flow


Being a Product Designer at a technology company often means that you have a high-quality screen and equipment. Counter-intuitively, this makes it harder to make design decisions that benefit the users, as the majority of them are on much older devices or lower screen resolutions.


Based on the learnings from the test, we optimised the colour contrast of the chat elements and cleaned up the flow itself to fit all screen styles and resolutions. Our chatbot should run smoothly on any device.
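The kind of check behind that optimisation can be sketched as a WCAG 2.1 contrast-ratio calculation - a generic illustration, not our actual tooling:

```typescript
// Sketch of a WCAG 2.1 contrast-ratio check between two "#rrggbb" colours.

function channel(c: number): number {
  // Linearise an 8-bit sRGB channel value per the WCAG definition.
  const s = c / 255;
  return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}

function luminance(hex: string): number {
  // Relative luminance of a "#rrggbb" colour.
  const n = parseInt(hex.slice(1), 16);
  return (
    0.2126 * channel((n >> 16) & 0xff) +
    0.7152 * channel((n >> 8) & 0xff) +
    0.0722 * channel(n & 0xff)
  );
}

function contrastRatio(fg: string, bg: string): number {
  // WCAG contrast ratio: (lighter + 0.05) / (darker + 0.05), in [1, 21].
  const [hi, lo] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}
```

WCAG AA asks for at least 4.5:1 for normal-sized text, which is a useful floor when tuning chat-element colours for low-end screens.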




4. Never leave the users hanging, not even on a slow connection


Slow internet connections are the worst. They are the true test of patience, especially when trying to get a problem solved. For example, imagine trying to use a chatbot to track your latest order - but after clicking “where’s my stuff”, nothing happens.


Most users assume that the chatbot is broken or buggy, i.e. “a bad chatbot”, when in fact it’s the slow internet connection that’s at fault.


Adding a small loading element to the chat to indicate “something is happening” had a major impact on user feedback; we found that users showed more patience and were much less stressed. We also heard that the loading indicator gave the bot a more “human feeling”, as it provided feedback based on the user’s action.


Design is wonderful - sometimes the smallest of changes can make the biggest difference.



Thorough UX testing is a rare win-win-win situation: it creates happy end-users who get their problems solved faster; happy business customers who see their CSAT scores soar; and happy software providers who get to see their product make its users happy.


In the end, every software company claims to be user-centric and customer-centric… and at Solvemate, we say that, too. But the proof of the pudding is in the eating: unless you put your product through real user testing, and actually learn from it, you are not truly user-centric. Being a UX designer is not about trying to be a mind-reader - it’s about caring about the users and their problems deeply enough to understand what they really want and need from your product.


Linh P. Nguyen is Senior Product Designer at Solvemate and a bundle of energy. She enlightens the team with her creativity. A passionate designer with side projects of her own, she finds most of her inspiration through her travels, where she finds her center through culture & traditions.