Summary: This is the first in a series about chatbots. In this first installment we cover the basics, including their brief technological history, their uses, basic design choices, and where deep learning comes into play. In subsequent articles we’ll describe in more detail how they are actually programmed and offer best-practice dos and don’ts.
According to Chatbot.org there are currently 1,331 active chatbots in the world. That’s a lot for a technology that didn’t even exist two or three years ago.
- 20% are in the US, 45% are in Western Europe and the UK, and about 1% are in China.
- They support input in 31 languages. (Note some chatbot development platforms support up to 50 languages.)
- The most popular consumer themes, among 27 distinct categories, are finance & legal (10%) and education, learning & lookup (6%); even erotic chatbots account for 1%.
- 96% support text recognition, 3% speech recognition, and 1% gesture recognition.
I don’t know how long Chatbot.org will be able to keep indexing all chatbots, but probably not very long given their explosive growth. Chatbots are rapidly becoming the UI of choice and may well become the dominant way we communicate our need for information and services to all types of apps.
What Exactly Is a Chatbot?
Just to be clear: at the simplest level, a chatbot is a software service that lets users hold a natural language conversation, in either text or voice, to get information or trigger an action.
Even the simplest chatbots are capable of multiple conversational steps, though some queries may be answered with a single response (will it rain in Boston tomorrow?). Many are designed to have a structured or even unstructured “conversation” with the user, seeking additional clarification and drilling down to provide more detail or further action on the information or service requested (Has my package been shipped? When will it arrive? Please message me when it has been delivered.).
Providing information is only part of their capability. Actions like scheduling an appointment or reserving a flight are also common for the category generally called personal assistants.
In design, the vast majority of today’s chatbots are powered by rules and structures built in by the programmer. The direction of development, however, is toward responses generated by artificial intelligence. That makes advanced chatbots semi-autonomous and capable of long-form conversations, including in some very subjective areas such as psychological counseling (Andrew Ng recently announced Woebot, a chatbot delivered through Facebook Messenger that gives one-on-one counseling for depression) or advising you on how to better manage your money.
A Little History – And a Short History It Is
While some folks will point as far back as the chatbot Eliza in the mid-60s, or Microsoft’s Clippy from MS Office 97, the fact is that our ability to process natural language with commercially acceptable accuracy languished below the 90% range through 2015. That was the year our hardware and our development of Recurrent Neural Nets with LSTM finally broke through to 95% to 99% accuracy. It’s no coincidence that this period marks the rise of Alexa, Cortana, Siri, and Google Assistant, to name only the best known.
According to a survey of over 300 companies ranging from small to large performed at the beginning of 2017 by Mindbowser and Chatbots Journal:
- 25% of businesses first heard of chatbots in 2015.
- 60% of businesses first heard of chatbots in 2016.
- 54% of developers first worked on chatbots in 2016.
- 75% intended to build a chatbot in 2017.
Gartner believes we are still on the uphill portion of the adoption curve for chatbots. Every major cloud or social media provider, including Amazon, Google, IBM, Facebook, Microsoft, Slack, Twitter, Whatsapp, and WeChat, along with a host of independents, offers easy-to-use SDKs for developing chatbots. The field is evolving rapidly in both technology and adoption. Customer service, sales and marketing, order processing, social media, payments, and recruitment are generally agreed to be the main targets, with many more to come.
Two Distinct Architectures for Chatbots
Two distinct chatbot architectures exist today.
Rules Chatbots: Over 90% of existing chatbots, and most that will be built in the next several years, fall into this category: chatbots based on programmed rules. They are relatively simple and fast to build, using decision-tree or waterfall-like logic structures of predefined queries and responses. If well designed they can handle well over 95% of queries, but they need an escape path to a human representative for when they fail. Most are text in, text out.
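The decision-tree logic described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not any vendor's implementation: a lookup table of trigger phrases and canned replies, with the human-handoff fallback that fires when no rule matches. All intents and reply strings here are invented for the example.

```python
# Minimal sketch of a rules-based chatbot: predefined triggers map to
# canned responses, and anything unmatched escapes to a human agent.
# The rules and replies below are hypothetical examples.

RULES = {
    "track order": "Your order shipped yesterday and arrives Friday.",
    "return": "To start a return, reply with your order number.",
    "hours": "We're open 9am-5pm, Monday through Friday.",
}

FALLBACK = "Let me connect you with a human representative."

def respond(user_text: str) -> str:
    """Match the user's text against each rule's trigger phrase."""
    text = user_text.lower()
    for trigger, reply in RULES.items():
        if trigger in text:
            return reply
    return FALLBACK
```

In a real rules chatbot the flat table would be a tree: each reply can point to a follow-up set of rules, which is how multi-step flows like returns are handled.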
AI Chatbots: These chatbots use deep learning engines to formulate responses. They do not have rigidly defined structures and are able to learn from their experience after some initial training. Only chatbots in this category are able to handle complex conversational inputs and provide long-form conversational outputs.
It’s a little misleading to say that only the second type relies on AI, since all chatbots rely on Natural Language Understanding (NLU) engines on their front end, and those engines were developed using RNN/LSTM deep learning models.
To be complete: if you want your chatbot to receive or deliver spoken responses, some components need to be added. You’ll also need to consider where the response data comes from.
If you want your chatbot to respond to voice, you’ll need to give it a ‘wake-up’ command. If it’s text only, then submitting the text is the wake-up.
In voice systems, there is an additional component to the NLU engine that converts the speech to text and back again.
In either rules-based or AI-based systems, you’ll need to supply the response data from in-house systems or external data sources such as weather stations.
You could also use prebuilt predictive analytics, just as in streaming systems, to make predictions. For example: ‘How many days’ supply of part X do we have on hand?’, ‘When will my package arrive?’, or ‘Which order shipments are not likely to make the deadline?’. Of course, if responses are too long or detailed, as in the third sample question, you’re better off building a web page or dashboard. Chatbots are meant to create convenience and speed things up, not take the place of web pages.
In deciding which design to use, try this simple 2 x 2 matrix.
Long or Short Conversations: If you want your bot to handle an extended conversation with lots of variables, then the generative AI model must be used. An example might be: “What are all my flight options on (two or three days) between (city x and city y) where I can get business class? And which offers the least expensive business class fare?” Short conversations can be commands: “schedule a meeting for xx”, or “I want to return this purchase” (which might require multiple steps, but each probably requires only a short lookup and a reference to pre-established rules).
Open or Closed Domain: The more narrowly you can restrict the knowledge base the chatbot must command, the more likely you can use the simple rules-based architecture. If the domain is extremely broad, you’ll probably need the generative AI model.
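The 2 x 2 matrix above reduces to a simple decision rule, sketched here as a hypothetical helper function: only the short-conversation, closed-domain quadrant fits the rules-based architecture, and the other three quadrants point toward generative AI.

```python
# Hypothetical helper encoding the 2 x 2 design matrix: conversation
# length (short/long) crossed with domain (closed/open). Only short
# conversations in a closed domain suit the simple rules-based design.

def recommend_architecture(long_conversation: bool, open_domain: bool) -> str:
    if not long_conversation and not open_domain:
        return "rules-based"
    return "generative AI"
```

In practice the boundary is fuzzier than a boolean, but as a first-pass design screen this quadrant check captures the article's guidance.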
In the next article in this series we’ll discuss in more detail the role of Natural Language Understanding (NLU) and the programmatic building blocks used in both rules-based and AI-based chatbots.
Other articles in this series:
Chatbot Best Practices – Making Sure Your Bot Plays Well With Users
Other articles by Bill Vorhies
About the author: Bill Vorhies is Editorial Director for Data Science Central and has practiced as a data scientist and commercial predictive modeler since 2001. He can be reached at: