
Decoding Dialogflow: Creating Virtual Agent “Intents”

Quickly understanding why a customer is contacting your organization is critical to delighting that person and providing a great customer experience. The fastest way to do this is through natural language -- speech or text -- regardless of whether a live agent or a virtual agent is responding.
In the chatbot world, designing virtual agents that can understand customer intent from spoken language or text is both art and science. The science is in training the machine learning engine to understand customer words and phrases while the art comes in creating a dialog interaction that is natural, appealing, and reflective of your company’s brand and persona.
In my previous Decoding Dialogflow post, I reviewed several contact center sources that can be mined for customer intent phrases and entities. Although Google is developing the ability to automate discovery of intents for bot creation, the company indicated in an email to me that the current state of the art for creating intents is “a very manual process; generally some business user who has a strong understanding of current customer experiences will begin by defining a single stream of work; often, they rely on some data based on existing call centers, etc. to tell them which are the most frequent (and therefore most important) queries, then aggregate training data based on that.” The net-net is that designing the intents for your virtual agent will take time and skill. Here are a few guidelines that will help.
Intents or Entities First?
The intent is why the customer is calling. Entities capture the supporting information within the phrases the customer uses -- names, locations, dates, times, and other objects that make the intent specific.
So, which do you define first -- intents or entities?
Well, it turns out that they will actually be defined and codified simultaneously. As data sources are mined for intents, the entities will come with them. However, in Dialogflow, you will need to define your entities before training your virtual agent with training phrases so that the machine learning engine can properly identify the entities. (I’ll discuss entity creation and “slot” filling in the next article in this series.)
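To make that ordering concrete, here is a toy Python sketch of entity annotation -- a stand-in for what happens when you mark entity values inside Dialogflow training phrases, not the actual Dialogflow API. The “room_type” entity names and synonyms are invented for illustration:

```python
import re

# Hypothetical entity with synonym values. In Dialogflow you define the
# entity first so the engine can recognize it inside training phrases.
ROOM_TYPE = {
    "suite": ["junior suite", "suite"],
    "double": ["double room", "double"],
}

def annotate(phrase, entities):
    """Replace entity mentions with @entity markers -- a toy stand-in
    for Dialogflow's training-phrase annotation."""
    for name, synonyms in entities.items():
        # Try longer synonyms first so "double room" wins over "double".
        for syn in sorted(synonyms, key=len, reverse=True):
            marked, n = re.subn(re.escape(syn), f"@{name}", phrase,
                                flags=re.IGNORECASE)
            if n:
                phrase = marked
                break
    return phrase

print(annotate("I want to book a double room", ROOM_TYPE))
# -> I want to book a @double
```

The point of the sketch: if the entity and its synonyms aren’t defined before you write training phrases, there is nothing for the annotation step to find.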
Intent Types
Several different types of intents are necessary for creating a natural interaction with the customer. For each of these types, I’ll summarize some best practices that will make the process of defining, training, and testing a bot easier. More detailed information is available in the Dialogflow agent design documentation.
Welcome and Goodbye Intents
When a customer initiates an interaction, the virtual agent should inform the person that he or she is conversing with an automated agent and inquire about the reason for the outreach. Beginning with this very first “welcome” intent, this and all subsequent intents should reflect an organization’s branding and the persona it wishes to project.
When the customer has successfully completed an interaction, the virtual agent needs to let the person know that the interaction is completed and offer some type of graceful goodbye. This may be something like, “Until next time,” or “Thanks for calling,” or “Is there anything else I can do for you?” These “goodbye” intents need to leave customers with a sense that the organization cares about them and is glad it has been able to meet their needs. How these goodbye intents will be phrased and spoken will depend upon the persona your organization wants to project.
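One simple way to keep that persona consistent is to hold all welcome and goodbye copy in one place. This is a minimal sketch; the “Hotel X” name and wording are invented, not from any Google sample:

```python
# Illustrative welcome/goodbye copy for a single, consistent persona.
PERSONA = {
    "welcome": ("Hi, I'm the Hotel X virtual assistant. "
                "I'm automated, but I can help you book or change a room. "
                "What can I do for you?"),
    "goodbye": "Thanks for choosing Hotel X. Until next time!",
}

def respond(intent_name):
    # Centralizing the copy makes it easy to keep the tone on-brand
    # everywhere the bot speaks.
    return PERSONA.get(intent_name, "Sorry, I didn't catch that.")
```

Note that the welcome message discloses up front that the agent is automated, as recommended above.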
Core Intents
“Core” isn’t a Google term, but I’m using it to describe the intents the virtual agent was designed to handle. Several guidelines for Dialogflow intents follow:
  • The complexity of your virtual agent application will ultimately determine the number of phrases your intents will require in order to best train the machine learning engine in Dialogflow. More complex bots will require more training phrases. At a minimum, you should craft at least 10 training phrases per intent. These phrases should include variations such as use of synonyms, different ways of saying things, and wording that includes both commands and questions. As an example, a training phrase that contains a reference to time should include different ways to say time parameters -- “7:00 a.m.,” “8:00 p.m.,” or “9 o’clock,” for example. These phrases illustrate different ways of expressing time using a.m., p.m., and o’clock.
  • Training phrases should uniquely identify a single, specific intent. Avoid using similar phrases for different intents, or the machine learning engine won’t be able to tell them apart.
  • If using a text-based bot, make sure to enable automatic spell correction.
  • At a minimum, include each entity in at least five training phrases. The more entities in an application, the more training phrases required.
  • In Dialogflow, you can adjust the machine learning classification threshold, which you can think of as a probability. Every time a user interacts with the virtual agent, the Dialogflow classification engine gives each intent a probability of being the one the user wants. If no intent’s probability meets or exceeds the classification threshold, the system can’t determine which intent the user wants, and it must go to a fallback intent (described below). If the classification threshold is set too high, the system won’t match any intents, or it will match only those where the user’s utterance is nearly an exact match to a training phrase. Higher classification thresholds require more training phrases.
  • If possible, use negative training phrases to delineate between good and bad queries. For example, if you’re building a hotel room booking bot, a good phrase might be, “I want to book a room.” A negative example would be, “I want to buy a book about rooms.” This negative example can tell Dialogflow that even though this phrase is similar to the good one, it isn’t a match to a core intent.
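To see how the threshold drives matching, here is a toy word-overlap classifier. It is emphatically not Dialogflow’s ML engine -- just an illustration of how a score below the threshold routes a query to fallback. The intent names, phrases, and threshold value are all invented:

```python
# Toy intent classifier: Jaccard word overlap stands in for the ML
# engine's match probability.
TRAINING = {
    "book_room": [
        "i want to book a room",
        "reserve a room for tonight",
        "can i book a room",
    ],
    "cancel_booking": [
        "cancel my reservation",
        "i need to cancel my booking",
    ],
}
THRESHOLD = 0.5  # raise this and only near-exact phrasings will match

def score(query, phrase):
    # Word-set overlap as a mock "probability" of an intent match.
    q, p = set(query.lower().split()), set(phrase.lower().split())
    return len(q & p) / len(q | p)

def classify(query):
    best_intent, best_score = "fallback", 0.0
    for intent, phrases in TRAINING.items():
        s = max(score(query, p) for p in phrases)
        if s >= THRESHOLD and s > best_score:
            best_intent, best_score = intent, s
    return best_intent
```

With this sketch, `classify("what is the weather today")` overlaps with no training phrase, falls below the threshold, and returns the fallback -- the same behavior the bullet above describes for Dialogflow.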
Fallback/Follow-Up Intents
As mentioned above, when the machine learning engine can’t assign any intent a probability above the classification threshold, it defaults to a fallback intent. This allows the bot application to repair an interaction when the user says something that doesn’t match an intent. A fallback intent can ask the user to repeat something, with phrases like: “Can you say that again?” “Sorry, what color did you say?” or “What item was that again?” Craft these fallback/follow-up intents and their corresponding responses to provide information that will guide the user to make a valid request.
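One common repair pattern -- sketched here with invented wording and limits -- is to make each consecutive reprompt more directive, and to hand off to a live agent after repeated misses:

```python
# Hypothetical fallback handling: each consecutive miss gets a more
# directive reprompt; after too many misses, escalate to a live agent.
REPROMPTS = [
    "Sorry, can you say that again?",
    "I didn't catch that. Are you booking or canceling a room?",
    "You can say things like 'book a room for Friday'.",
]
MAX_MISSES = 3  # invented limit; tune for your application

def fallback_response(miss_count):
    if miss_count >= MAX_MISSES:
        return "Let me connect you with a live agent."
    # Later reprompts coach the user toward requests the bot understands.
    return REPROMPTS[min(miss_count, len(REPROMPTS) - 1)]
```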
Testing Your Intents
To avoid bias in testing, have people who didn’t help develop your virtual agent and who don’t know the training phrases try to use the bot. This will give you insight into how the bot performs and how naturally the conversation flows.
Make sure you test the bot on all platforms you’re targeting. Also, if you’re using rich messaging in your bot responses (images, speech, cards), test these within the context in which the bot is expected to operate.
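A lightweight way to run such tests is a small regression harness: feed held-out phrases (written by testers who never saw the training data) through the bot and report every mismatch. Here `classify` stands for whatever function maps a user query to an intent name -- the harness itself is an illustrative sketch, not a Dialogflow tool:

```python
# Minimal regression harness for intent matching. `held_out` is a list
# of (query, expected_intent) pairs from independent testers.
def evaluate(classify, held_out):
    misses = []
    for query, expected in held_out:
        got = classify(query)
        if got != expected:
            misses.append((query, expected, got))
    accuracy = 1 - len(misses) / len(held_out)
    return accuracy, misses
```

Reviewing the `misses` list after each training change shows not just whether accuracy moved, but which kinds of phrasing the bot still gets wrong.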
In Summary
  • Virtual agents need to get to what the user wants quickly; otherwise the user will want to go to a live agent.
  • Creating intents requires both science and art. The more complex the application, the more phrases required to train Dialogflow’s machine learning engine.
  • In addition to “core” intents, you need to craft welcome, goodbye, and fallback/follow-up intents to make the conversation flow more naturally and to repair a conversation if the user says something the virtual agent can’t understand.
  • Have non-developers test the virtual agent in the environments in which it’s intended to run.
What’s Next
The next article in this series, appearing in late August, will focus on identifying and creating Dialogflow entities -- the objects of a conversation.