
Chatbot Best Practices – Making Sure Your Bot Plays Well With Users

Summary:  This is the third in our series on chatbots.  In this installment we’ll look at the best practice dos and don’ts as described by a number of successful chatbot developers.

In our first article we covered chatbot basics, including their brief technological history, uses, basic design choices, and where deep learning comes into play.  The second article focused on the universal NLU front ends for all chatbots and the technical definitions and programming particulars needed to understand how they really function.

In this article, we’ve scoured the internet for advice from successful chatbot developers to provide some useful best practices, or at least some valuable dos and don’ts.

It’s Not About Chatbots.  It’s About the User’s Experience

The user doesn’t care that you’ve got a chatbot.  They don’t care whether it has AI or can speak to them.  They only care that it meets their needs in the quickest, simplest, most reliable way compared to the other options available.

The Two-Step Rule: In general, if your user has to perform more than two steps in your UI, a chatbot becomes a benefit.  The more UI steps it replaces, like those horrible deep menus, the stronger the case for a chatbot.

But Watch Out: If the amount of information the bot has to return is too great or too complex, you’ve overreached.  Don’t send a chatbot to do a web page’s job.

Mix Buttons and GUIs with Your Chatbot – Maybe

Surprisingly, there’s fairly vigorous disagreement over whether these two UI types should be mixed.  The one point of agreement is that the best chatbots don’t rely too heavily on lists and GUIs.  On the whole, it depends on what you’re trying to accomplish.

The main thing is that the user’s interaction with your chatbot not be confusing.  When Facebook released its Messenger platform it even suggested that developers strip the interaction down as much as possible and cut to the chase by putting the most important features in menus.  It’s a balancing act: aim for clarity, avoid confusion.

Buttons Can Clarify Intent: The first argument in favor has to do with buttons and lists very early in the interaction with the user.  If your bot is designed to do three or four specific things and nothing else, then it can indeed be good design to present those options as buttons.  This helps communicate the bot’s limits to the user and eliminates superfluous text or voice requests for things that aren’t covered.  From a technical perspective, it makes the intent and action perfectly clear.
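To make that concrete, here’s a minimal sketch of the idea; the payload fields, labels, and intent names are illustrative and not tied to any particular messaging platform:

```python
# Illustrative button payload: each label maps directly to a bot intent,
# so the user's first choice arrives as an unambiguous intent rather than free text.
WELCOME_BUTTONS = [
    {"label": "Check my order", "intent": "check_order"},
    {"label": "Change a booking", "intent": "change_booking"},
    {"label": "Manage my account", "intent": "manage_account"},
]

def build_welcome_message() -> dict:
    """Return a platform-agnostic welcome payload listing the bot's few capabilities as buttons."""
    return {
        "text": "Hi! I can help with a few specific things. Pick one:",
        "buttons": WELCOME_BUTTONS,
    }
```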

GUIs Can Help Where Choices Are Many:  Suppose you are using a chatbot to reserve a movie or airline seat.  You’ve spoken or texted your way through date, itinerary, flight time, and class and now it’s time to select a seat.  I’d agree with the camp that says a GUI is better here since, for example, you can see how far you are from an exit or if you’re next to the bathroom.

The value of chatbots’ emergence is that humans are fundamentally social and used to conversing in language to express their needs.  This makes chatbots a good fit.  Bots that are excessively driven by menus can seem static and inflexible, losing that fundamental conversational edge.

Persistent Menus – Probably So

If your platform allows menus, a widely agreed best practice is that there should always be a persistent menu that can take the user back to the web site or app and, crucially, offers an off-ramp to a human.

Those First Few Seconds with the User are Important

These first few seconds are so precious that developers have given them a name: “onboarding”.  There’s wide agreement that there are three things you should tell your user right away:

  1. Make it clear that they’re talking to a bot.
  2. Make sure the user knows how to exit to a human.  Almost invariably users will try to make your bot do something it wasn’t designed to do.  Chatbots receive a wide variety of requests that just don’t make any sense to the bot, no matter how good the NLU is.
  3. Tell the users specifically what the bot can do.

Dean Withey at Ubisend says they’ve had particularly good luck when the first message forces the user to make a decision, and he gives this example:

“Hi, I am a customer service chatbot, I can help you manage your account, check your orders and change your booking. If you’d like to do any of this just respond with hi. If you want to speak to a human, reply with the word HUMAN and someone will be right here.”
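A minimal sketch of routing that forced first decision might look like the following (the function and intent names are illustrative, not Ubisend’s actual implementation):

```python
# Illustrative sketch of an onboarding handler that forces a first decision.
ONBOARDING_TEXT = (
    "Hi, I am a customer service chatbot. I can help you manage your account, "
    "check your orders and change your booking. If you'd like to do any of this, "
    "just respond with hi. If you want to speak to a human, reply with the word "
    "HUMAN and someone will be right here."
)

def route_first_reply(user_text: str) -> str:
    """Route the user's first reply to a bot flow, a human handoff, or a re-prompt."""
    reply = user_text.strip().lower()
    if reply == "human":
        return "handoff_to_human"
    if reply in {"hi", "hello", "hey"}:
        return "start_bot_flow"
    return "repeat_onboarding"  # unclear reply: send ONBOARDING_TEXT again
```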

Your Chatbot Needs a Personality

Here’s the challenge: teach a bunch of very talented programmers to have great interpersonal conversations.  We noted earlier that Gartner estimates that by 2020 fully 10% of new IT hires will be writing these scripts.  Copywriting may become the next big required IT skill.

The bot’s personality should reflect your ‘Brand Voice’ in every respect.  The art is finding the balance between friendly and too casual.  Of course chatbots should talk like a person, and NLU will help with that, but the details of whether it’s ‘greetings’, ‘hello’, ‘hi there’, or ‘hey buddy’ are the issues you’ll need to think about.

Of course personality isn’t just word choice.  Beyond that it is defined by the mood you create, the tone and style you set, and of course the user’s direct experience with your bot.  As you consider this challenge it may be useful to think in terms of the classical five-factor personality model:

  1. Agreeableness (friendly/compassionate vs. analytical/detached)
  2. Conscientiousness (efficient/organized vs easy-going/careless)
  3. Neuroticism (sensitive/nervous vs secure/confident)
  4. Extraversion (outgoing/energetic vs solitary/reserved)
  5. Openness (inventive/curious vs consistent/cautious)

Align domain and personality where you can.  Since you are in charge of both the ‘voice’ and the visual avatar (if there is one) representing your brand, you might want to think about aligning them.  If the domain is a therapy session, the bot might look and sound like a therapist.  If you’re selling pet supplies, it might be a playful (but helpful) puppy or kitten.  If you have an ecommerce site where the important information is product availability, sizing, and order status, it might be just as well to have a personality that mirrors your customers, or very little distinct personality at all.  Alignment isn’t always available, but when it is it can improve the experience.

The Thank You Test:  Here’s perhaps the ultimate test: users know it’s a bot but still feel the need to say ‘thank you’.

Always Have an Escape

Never let your user become frustrated with your chatbot.  Early on we talked about having persistent menus that always include a get-out-quick ‘talk to a human’ command.  Another kind of escape is the ‘fallback’ response.  That is, if your bot has asked for clarification of a user’s request more than twice (pick your own threshold), give a response like “Sorry I didn’t understand.  Let me get a human on the line for you.”
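Here’s a simple sketch of that fallback logic, using an illustrative counter; the threshold of two is just the suggestion above, so adjust it to taste:

```python
MAX_CLARIFICATIONS = 2  # the suggestion above; pick your own threshold

class Dialog:
    """Tracks how often the bot has failed to understand in a row."""

    def __init__(self):
        self.failed_turns = 0

    def handle_unrecognized(self) -> str:
        """Ask for clarification, then escalate to a human after repeated failures."""
        self.failed_turns += 1
        if self.failed_turns > MAX_CLARIFICATIONS:
            self.failed_turns = 0
            return "Sorry I didn't understand. Let me get a human on the line for you."
        return "Sorry, I didn't quite get that. Could you rephrase?"
```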

Avoid User Confusion – Design to Make Actions and Intents Clear

Maybe a better title for this section is ‘test, test, test’.  A little review of chat logs after you’ve released your chatbot on the world will quickly reveal where options need to be made clearer or where additional branches are needed in the dialog.  In the weather bot example, our chatbot needs to know the city and the date.  If they’re not entered, we may assume ‘today’ and ‘current location’.  Testing will reveal whether this is sufficiently explicit.

Making some information required:  You may also want to consider making critical information a mandatory entry, like ‘check-in date’ for room reservations.
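To make the distinction concrete, here’s a hedged sketch of slot handling for the weather bot example: optional slots fall back to defaults, while a required slot (such as a check-in date) blocks the action until the user supplies it.  The slot names and defaults are illustrative:

```python
def resolve_slots(slots: dict) -> dict:
    """Fill optional weather-bot slots with defaults when the user omits them."""
    resolved = dict(slots)
    resolved.setdefault("date", "today")
    resolved.setdefault("city", "current location")
    return resolved

# Required slots block the action entirely until the user supplies them.
REQUIRED_SLOTS = {"check_in_date"}  # e.g. mandatory for a room reservation

def missing_required(slots: dict) -> set:
    """Return the required slots the user still has to provide."""
    return {name for name in REQUIRED_SLOTS if not slots.get(name)}
```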

Don’t Let Your Chatbot Learn or Use Bad Language

Remember the NLU feature where the system continues to learn from your starter “User Says” list?  You can turn that off and rely only on hand-entered phrases that must be matched, but most of us will leave it on.

Now remember Microsoft’s Tay, the 2016 chatbot that bad actors maliciously taught, in the space of 18 hours, to spout overtly sexual, anti-Semitic, Nazi-loving banter.  Now imagine what would happen to your PR if that happened to you.

Many times there are points in a dialog branch where your bot will be instructed to repeat the last words or phrase the user provided.  You never want those to be offensive words, and yes, your users will try to make your chatbot say such things.

While there’s no perfect way to do this, different platforms offer features that let you define specific offensive words.  For example, you can find a word filter on GitHub with a ‘bad word’ list that covers a variety of types of offensive speech, including the sort of racist, sexist, and ableist terms we would never say to a user (https://github.com/dariusk/wordfilter).  If you use this particular filter, note that it doesn’t screen for scatological words, which you may or may not want to add.  It’s not perfect and may actually screen out words you want to allow, so be sure to review the documentation for any filter you use.
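The GitHub project above is a JavaScript library, so rather than guess at its API, here’s a generic Python sketch of the same idea: before the bot echoes any user-supplied text, check it against a blocked word list and fall back to a neutral reply if anything matches.  The word list shown is just a placeholder:

```python
import re

# Placeholder list: load your real blocked-word list (e.g. from a filter project) here.
BLOCKED_WORDS = {"badword1", "badword2"}

def safe_echo(user_text: str, fallback: str = "Let's keep things friendly.") -> str:
    """Refuse to repeat user text that contains a blocked word."""
    tokens = re.findall(r"[a-z']+", user_text.lower())
    if any(token in BLOCKED_WORDS for token in tokens):
        return fallback
    return user_text
```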

Some applications also allow a whitelist of words.  Curiously, some of these may seem offensive at first glance but are whitelisted deliberately, for example to explicitly allow ‘angry’ words.  Yes, your users will occasionally be angry and will need to be able to express that.  A good plan is for an angry response like “OMG shut up and go away” to automatically trigger the ‘speak to a human’ default action.
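One way to wire that up, sketched with illustrative trigger phrases: scan the user’s message for ‘angry’ markers and, when one appears, route straight to the speak-to-human action rather than pressing on with the script:

```python
# Illustrative trigger phrases; in practice you would tune these from your chat logs.
ANGER_TRIGGERS = {"shut up", "go away", "this is useless"}

def should_escalate(user_text: str) -> bool:
    """Escalate to the 'speak to a human' action when the user sounds angry."""
    text = user_text.lower()
    return any(trigger in text for trigger in ANGER_TRIGGERS)

# Example: should_escalate("OMG shut up and go away") returns True.
```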

Guard Against Unsafe Input

Just as you filtered for unpleasant output, you’ll also want a module that guards against unwanted input like foreign code execution (yes, your users will try this).  Think of it this way: how do you prevent your users from entering malicious Python code into an input field?

For Twitter bots, this means not DMing or @-messaging other users.  For Slack bots, it means limiting the permissions allocated to the bot so it can’t issue commands.

A good first step is removing potentially dangerous characters like ‘@’ or ‘#’ that are meaningful on Twitter.
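A minimal sanitization pass along those lines might strip the platform-significant characters and reject anything that looks like code before it reaches your downstream logic; the patterns here are illustrative, not a complete defense:

```python
import re
from typing import Optional

# Crude, illustrative heuristics for input that looks like injected code.
SUSPICIOUS_PATTERNS = [
    re.compile(r"<\s*script", re.IGNORECASE),  # HTML/JavaScript injection attempts
    re.compile(r"\b(import|exec|eval)\b"),     # Python-ish code fragments
]

def sanitize_input(user_text: str) -> Optional[str]:
    """Strip Twitter-significant characters and reject code-like input."""
    cleaned = user_text.replace("@", "").replace("#", "")
    if any(pattern.search(cleaned) for pattern in SUSPICIOUS_PATTERNS):
        return None  # treat as unsafe and fall back to a canned response
    return cleaned
```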

Security experts will tell you, correctly, that there’s no way to perfectly secure your chatbot, but stay aware and do your best.

Privacy, GDPR, and other Legal Stuff

This doesn’t appear on many lists yet, but it probably will soon.  Certainly when the EU’s GDPR regulations become enforceable in May 2018, we will all need to think about privacy and opt-in issues.  Chatbots are going to have to include clear opt-in terms and conditions and privacy documentation.

Folks who have thought deeply about this suggest there are three approaches:

  1. Passive opt-in:  Present a statement in the first or second message such as “By talking to me, you agree to our terms and conditions and privacy policy” and make it a clickable link.
  2. Effort-based:  Send an entirely separate message with two buttons, one “Yes” or “Agree”, and the other “No”.
  3. Passive use of a persistent menu link:  Add a menu link to the terms and conditions and privacy policy.  You’ll probably want to combine this with the passive opt-in procedure to be safe.

Yes, the more effort we require of users, the more likely they are to abandon the conversation.  But at least you’ll know that those who do proceed are that much more committed.
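Here’s a rough sketch of the effort-based option; the button labels and field names are hypothetical, not any specific platform’s API:

```python
# Hypothetical payload for an effort-based opt-in message with two buttons.
OPT_IN_MESSAGE = {
    "text": (
        "Before we start: do you agree to our terms and conditions "
        "and privacy policy? (Link in the menu.)"
    ),
    "buttons": [
        {"label": "Agree", "action": "record_consent"},
        {"label": "No thanks", "action": "end_conversation"},
    ],
}

def handle_opt_in(action: str) -> str:
    """Only start the conversation once consent has been recorded."""
    return "start_bot_flow" if action == "record_consent" else "say_goodbye"
```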

In Summary:

There is no single definitive list of best practices for chatbots.  These were all drawn from the writings of successful chatbot developers and platform providers and seemed logical to us.  Like many fast-evolving areas of data science, you’ll want to keep an eye out for new developments month by month.

Other articles in this series:

Beginners Guide to Chatbots

Under the Hood With Chatbots

About the author:  Bill Vorhies is Editorial Director for Data Science Central and has practiced as a data scientist since 2001.  He can be reached at:

[email protected]