
Decoding Dialogflow: What Makes a Good Bot

In part one of this series, we discussed conversational AI and described the principal capabilities Google Contact Center AI (CCAI) provides: Virtual Agent, Agent Assist, and Conversational Topic Modeler. We also illustrated how Dialogflow, Google’s natural language understanding engine within CCAI, works. In this article we focus on some of the characteristics that make for a good intelligent virtual agent or bot project.
The tremendous interest in conversational AI is a global phenomenon. Google reports striking year-over-year growth in the number of developers who are using Dialogflow to create intelligent agents, with the number of account users surpassing 850,000 as of April.

Growth over time in the number of developers using Google's Dialogflow platform to create intelligent agents (source: “The Next Conversation: Powering Customer Conversations with AI” at Google Cloud Next ’19)

With the growing importance of intelligent agents and conversational interfaces, it might be illuminating to consider, “What makes a good bot project?” And, perhaps just as enlightening, “What causes bots to fail?”
Characteristics of Good and Bad Bots
In an email exchange around what constitutes a good bot, Google suggested that “any use case where you interact with customers regularly to address questions or other needs” could be a candidate for an intelligent bot project. Within the scope of the contact center, this will include classic customer support services in which a significant percentage of the call volume corresponds to a recurring set of customer questions or situations. Having live agents respond over and over to the same frequently asked questions (FAQs) is one example of an activity that could potentially be automated to free up valuable live agent time.
Other examples include process-oriented support such as changing a password, transferring money, or placing an order for food items or tickets. Bots are even beginning to augment existing processes: In an interesting role reversal, Google reports that it has successfully created bots that teach live agents how to better interact with customers.
What Makes a Good Bot
So, if intelligent bots are going to be so important in our future, what are some of the characteristics that go into designing a good bot? Here are some specifics:
1. A bot needs to do something useful for the customer and do it in a way that is easier than other alternatives. It should save time and reduce the effort a customer needs to expend to resolve her or his needs while interacting with an organization’s products or services. It then needs to get out of the way, meaning that the bot should be efficient, succinct, and fast.
2. A bot needs to communicate clearly to the customer what it can do. Just telling a customer he or she is conversing with a bot isn’t enough. The bot needs to state, in a succinct way, what its capabilities are.
3. A bot needs to reflect an organization’s brand and image. It should behave and act in support of the company’s overall corporate persona. The way messages are crafted, the words used, the tone of voice, the cadence, wit, and sass need to be designed so that the people using the bot are continuously exposed to the brand’s values during their engagements with the bot.
4. A bot should understand the user’s intent and the context.
  • Understanding intent involves a technical term called “conversational implicature.” The understanding we get from words is rarely literal or superficial; consequently, the bot needs to understand what we mean, not necessarily what we say. An example would be, “Pass the salt.” What we really mean by this phrase is that someone wants to add salt to their meal, as opposed to wanting to catch salt that is thrown or tossed to them.
  • In Dialogflow, “context” has a specific meaning: It’s the relevant information surrounding a user’s request that was not directly referenced in the person’s words. In a weather bot, for example, when you say, “How’s the weather?” the context around this request involves both location and time. The bot needs to know these two things to give the person a response. Hence, if the user doesn’t state them directly, the bot can assume a sensible default context: the person most likely wants today’s weather for the current location.
  • A context adjunct is that when possible the bot should be able to understand a customer’s journey. This implies that the bot may receive additional contextual information from backend systems. For example, it could know that the customer recently made a purchase with the company, or it could know that a customer is looking at a particular website page. Adding context data from the customer’s journey can then inform the bot and provide a better starting point for an interaction.
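The context-defaulting behavior described above can be sketched in plain Python. This is a hypothetical illustration, not the Dialogflow API: in a real agent, contexts are objects attached to the session, and the `SessionContext` type, field names, and `resolve_weather_request` function below are all assumptions made for the example.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class SessionContext:
    """Hypothetical stand-in for session context supplied by backend systems."""
    location: Optional[str] = None   # e.g., known from the customer's profile
    day: Optional[date] = None

def resolve_weather_request(spoken_location: Optional[str],
                            spoken_day: Optional[date],
                            ctx: SessionContext) -> tuple[str, date]:
    """Fill in whatever the user didn't say from context, then from defaults."""
    location = spoken_location or ctx.location or "current location"
    day = spoken_day or ctx.day or date.today()
    return location, day

# "How's the weather?" -- neither location nor day was spoken,
# but the session context already knows the customer is in Denver.
ctx = SessionContext(location="Denver")
location, day = resolve_weather_request(None, None, ctx)
```

The key design point is the fallback chain: explicit user input wins, then session context, then a safe default, so the bot never has to stall the conversation to ask for information it can reasonably infer.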
5. A bot needs to be designed using the “cooperative principle.” This principle relates to how people engage in effective conversation; it comprises four maxims:
  • Maxim of Quantity: The bot needs to provide as much information as necessary to advance the conversation, but no more information than is required.
  • Maxim of Quality: The bot needs to contribute true information and not bog the user down with things that may be false or for which the bot lacks evidence.

An example of the maxim of quality: The bot is moving the conversation forward by offering reasonable suggestions.

  • Maxim of Relevance: The bot’s response must be relevant within the context of the conversation.
  • Maxim of Manner: The bot must deliver clear responses that are easily understood.
6. A bot should be developed iteratively. The initial rollout should start simple, with frequent revisions until the bot is performing at a satisfactory level. Then drill deeper and add more features/capabilities. Good bots have performance metrics that a company can monitor continually, and a good bot is continually updated and maintained.
7. In human conversations, there are errors. A well-crafted bot knows how to keep the conversation going gracefully when a person makes a mistake in the conversation. Users may make three types of conversational errors when interacting with a bot:
  • No Input – the person hasn’t responded, or the bot hasn’t heard the user’s response.
  • No Match – the bot can’t understand or interpret the person’s response in context.

In this "no match" example, the bot has gracefully responded with a suitable, but different, prompt when the user didn't reply with how many tickets were needed. Good bot design considers error scenarios at every point where a user can interact with the bot.

  • System Error – the bot is asked to do something that it isn’t capable of doing.
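A minimal sketch of how a turn handler might distinguish these three error types and reprompt gracefully. Real Dialogflow agents handle no-input and no-match through built-in fallback intents and events; the `handle_turn` function, the `REPROMPTS` table, the ticket limit, and the escalation threshold below are all assumptions made for illustration.

```python
# Varied reprompts, as in the "no match" ticket example above:
# repeating the same prompt verbatim feels robotic, so each retry differs.
REPROMPTS = {
    "no_input": ["Sorry, I didn't catch that. How many tickets?",
                 "How many tickets would you like?"],
    "no_match": ["I can book 1 to 8 tickets. How many would you like?",
                 "Please say a number between 1 and 8."],
}

def handle_turn(user_reply, attempt):
    """Return (status, prompt); hand off to a live agent after two retries."""
    if attempt >= 2:
        return "escalate", "Let me connect you with a live agent."
    if user_reply is None:                         # No Input: nothing heard
        return "retry", REPROMPTS["no_input"][attempt]
    if not user_reply.strip().isdigit():           # No Match: can't interpret
        return "retry", REPROMPTS["no_match"][attempt]
    count = int(user_reply)
    if not 1 <= count <= 8:                        # System Error: beyond capability
        return "retry", "I can only book up to 8 tickets at a time."
    return "ok", f"Booking {count} tickets."
```

Capping retries before escalating to a live agent is the graceful-degradation piece: the bot keeps the conversation going, but never traps the user in an error loop.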
8. A bot should be designed for multimodal access, meaning it should work across device types. The bot should take advantage of the conversational and/or the visual capabilities of the person’s device. For example, if the bot detects that the person it’s interacting with can see images or click on options, it should use these capabilities in the interaction if doing so would help move the conversation forward.
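One way to sketch this capability check, loosely modeled on how conversational platforms expose surface capabilities to fulfillment code. The `build_response` function, the `"SCREEN"` capability name, and the response dictionary shape are assumptions for illustration, not a real Dialogflow API.

```python
def build_response(capabilities: set[str], options: list[str]) -> dict:
    """Pick a response format suited to the user's device."""
    if "SCREEN" in capabilities:
        # Visual surface: offer tappable suggestion chips alongside the text.
        return {"text": "Here are your options:", "chips": options}
    # Voice-only surface: read the options aloud instead.
    return {"text": "Your options are " + ", ".join(options) + "."}
```

The same intent fulfillment can then serve a smart speaker, a phone, or a web chat widget, simply degrading from chips to spoken lists when no screen is present.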
Different kinds of conversational and visual interactions are listed in the table below (source: Google).

As this Google image shows, bots don't have to be only text-based or boring. This sample visual response from an intelligent bot shows how the bot can use prompts, images, and chips.

Click below to continue to Page 2: Bad Bot, Bad Bot