No Jitter is part of the Informa Tech Division of Informa PLC


Is Generative AI Ready to Talk to Your Customers?

During the Locknote session at Enterprise Connect in March, the panelists discussed use cases for generative AI in the contact center, and disagreement ensued. The question was: should organizations use generative AI for customer-facing interactions? While I noted that this is already happening in some situations, Dave Michels was adamant that generative AI should never be used to interact with customers, given its propensity to hallucinate and make up answers. Dave and I later agreed that generative AI is best left in the back office and for employee-facing tasks, although I’m more optimistic about seeing more customer-facing use cases in the near future.

We’ve all seen the headlines of generative AI gone bad – New York City’s generative AI chatbot told businesses to break the law, Google’s generative AI told someone to use non-toxic glue to keep the cheese from sliding off pizza – the list goes on. I recently had a situation where one of the generative AI platforms presented made-up data and even provided fake links as sources of this data (the AI later apologized when I confronted it). We are increasingly asking, “How can we trust generative AI to interact with an organization’s customers with the possibility that it may provide false or even dangerous information?”

It’s important to limit the use cases to those where generative AI can provide value and cause little or no harm. For example, there is a low risk of harm in using generative AI for call or interaction summarization and wrap up, as the generative AI provides a summary based on a transcript of the interaction. Another “safe” use case is agent assistance, where the AI presents the agent with suggested responses and information based on the organization’s specific knowledge base and training data. In both these cases there is a “human in the loop” to ensure accuracy and reliability.
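The "human in the loop" pattern described above can be sketched in a few lines. This is an illustrative sketch only: `draft_summary` is a hypothetical stand-in for whatever summarization API a vendor exposes, and the agent-approval callback represents the human checkpoint before anything is persisted.

```python
from dataclasses import dataclass

def draft_summary(transcript: str) -> str:
    """Hypothetical stand-in for an LLM summarization call.

    A real deployment would call the vendor's API here; this stub just
    truncates the first line so the sketch stays runnable.
    """
    first_line = transcript.strip().splitlines()[0]
    return f"Summary (draft): {first_line[:80]}"

@dataclass
class WrapUp:
    summary: str
    approved: bool = False

def human_in_the_loop(transcript: str, agent_approves) -> WrapUp:
    """Generate an AI draft, but mark it approved only after the agent signs off."""
    draft = draft_summary(transcript)
    wrap = WrapUp(summary=draft)
    if agent_approves(draft):  # the human checkpoint
        wrap.approved = True
    return wrap

# Usage: the agent callback can accept, edit, or reject the draft
# before anything reaches the customer record.
result = human_in_the_loop("Customer called about a delayed order.",
                           agent_approves=lambda s: "order" in s.lower())
print(result.approved)
```

The key design point is that the AI's output is advisory: nothing is stored or shown to the customer until the agent confirms it.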

Another appropriate use case is building virtual assistants. Here, vendors leverage LLMs and generative AI to train and enhance intent recognition, enabling enterprises to develop virtual assistants up to 10 times faster than with traditional methods.

The question remains: is generative AI safe for general self-service interactions where a customer is trying to get information or conduct a transaction? The general consensus is no.

I spoke with several contact center AI vendors recently about using generative AI for customer-facing interactions. Most acknowledge that generative AI has shortcomings in customer conversations: it’s easy for the AI to get off track, so some type of guardrails is required to keep the interaction on topic. For example, a customer shopping for shoes can’t ask the AI about politics or the weather; the conversation needs to stay within the confines of shoe shopping.
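The simplest form of such a guardrail is a gate in front of the model: classify each message as in-domain or not, and deflect anything off-topic before it ever reaches the generative model. The sketch below uses a keyword allow-list for the shoe-shopping example purely for illustration; real products use an intent model rather than keywords.

```python
# Minimal topical-guardrail sketch, assuming a keyword-based classifier
# (real deployments would use an intent model, not keywords).

ALLOWED_TOPICS = {"shoe", "shoes", "size", "sizing", "boot", "sneaker",
                  "return", "order", "color", "price", "stock"}

REDIRECT = ("I can only help with shoe shopping. "
            "Would you like to look at sizes, styles, or prices?")

def guarded_reply(user_text: str, generate) -> str:
    """Pass on-topic messages to the model; deflect everything else."""
    words = {w.strip(".,!?").lower() for w in user_text.split()}
    if words & ALLOWED_TOPICS:
        return generate(user_text)  # on-topic: let the model answer
    return REDIRECT                 # off-topic: fixed, safe redirect

# Usage with a stand-in generator function:
print(guarded_reply("What size boot should I order?", lambda t: "Size 10 fits most."))
print(guarded_reply("Who will win the election?", lambda t: "..."))
```

Because the off-topic branch returns a fixed string, the model never gets a chance to improvise an answer about politics or the weather.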

Everyone agreed that the best solution is to use generative AI in conjunction with other AI tools such as conversational AI. For example, Cognigy notes that “Large Language Models are incredibly powerful, but they alone aren’t (yet) optimized for enterprise deployments,” and that a combined solution with Conversational AI helps to provide guardrails and maintain compliance. Cognigy’s AI Copilot brings together conversational AI and generative AI to provide real-time AI support to assist contact center agents, including sentiment analysis, data retrieval, task automation, and call summarization.


Most vendors that apply LLMs and generative AI to customer-facing interactions use them to identify intent, while conversational AI generates the actual responses and dialog.

  • NLX uses generative AI for capturing information, while conversational AI interacts with the customer. In a hypothetical travel use case, the generative AI collects the specific elements the workflow needs, such as the number of passengers, travel destination, and travel dates. If the customer asks about anything unrelated to the travel inquiry, the generative AI steers the interaction back to the information it still needs to capture.
  • NICE’s Enlighten AI platform relies on purpose-built AI, along with conversational AI and LLMs trained on customer interactions and use case-specific data. For example, NICE Autopilot for customer self-service uses generative AI to understand customer intents and create personalized and human responses, while ensuring that guardrails are in place. Another solution, Autopilot Knowledge, delivers knowledge base information directly to the consumer, using generative AI to converse in a natural, human-like manner. The LLM’s guardrails are defined by the business’ corpus of knowledge to provide reliability.
  • One vendor is probably the most aggressive when it comes to customer-facing use cases, noting it has 120 customers with generative AI at the core of customer-focused interactions, with several published case studies. It uses generative AI to generate responses and to create conversation workflows in real time that adjust to what the customer says or asks. The company notes a 50% improvement in customer journey completion rate and a 90% self-serve automation rate. Pelago, a travel experiences booking platform from Singapore Airlines Group, uses its generative AI assistance to help customers book the right itineraries and get support when needed. A large logistics customer uses a generative AI customer-facing chatbot to automate customer service; when the generative AI doesn’t know an answer or a customer asks an unrelated question, an agent interface connects the customer to a live agent or tells the user it can’t provide a response.
  • Cognigy uses LLMs for intent recognition, and to rephrase user inputs to improve and refine intent recognition. The company also uses generative AI to help its AI Agents automatically detect and transition to the customer’s spoken language.
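The hybrid pattern the vendors describe, where the LLM only classifies intent and a deterministic conversational flow supplies the wording, can be sketched as below. The intent classifier here is a keyword stub standing in for a real LLM call; the flow labels and responses are invented for illustration.

```python
# Sketch of the hybrid pattern: a (stubbed) LLM picks an intent label,
# and the customer-facing reply comes from a scripted conversational-AI
# flow, so raw model output never reaches the customer.

SCRIPTED_FLOWS = {
    "book_travel": "Great, how many passengers, and what are your dates?",
    "check_order": "Sure, please give me your order number.",
    "unknown":     "Let me connect you with a live agent.",
}

def classify_intent(utterance: str) -> str:
    """Stand-in for an LLM intent call; returns one of a fixed label set."""
    text = utterance.lower()
    if "flight" in text or "travel" in text:
        return "book_travel"
    if "order" in text:
        return "check_order"
    return "unknown"  # off-topic or unrecognized: escalate to a human

def respond(utterance: str) -> str:
    # The LLM only chooses the label; the wording is fully scripted.
    return SCRIPTED_FLOWS[classify_intent(utterance)]

print(respond("I need to book a flight to Tokyo"))
print(respond("Tell me a joke"))  # falls through to the live-agent path
```

Constraining the LLM's output to a closed label set is what makes this architecture safer than free-form generation: a hallucination can at worst pick the wrong flow, not invent a wrong answer.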


We’re Still in Generative AI’s Internship Years

It’s too soon to say whether generative AI is ready for customer-facing interactions; we’re in very early days, and there aren’t many actual customer examples to turn to. While the technology will certainly improve over the coming months and years, at this point generative AI may be too unstable to serve as the primary interface to customers. Without the right guardrails and properly trained models, there’s a high risk of the AI providing misinformation, which can damage the brand and the customer relationship. If you’re eager to start using AI in your customer-facing tech, the best approach for now is to combine AI technologies to get the benefits of generative AI without the risk.

This post is written on behalf of BCStrategies, an industry resource for enterprises, vendors, system integrators, and anyone interested in the growing business communications arena. A supplier of objective information on business communications, BCStrategies is supported by an alliance of leading communication industry advisors, analysts, and consultants who have worked in the various segments of the dynamic business communications market.