Artificial intelligence (AI) is changing everything about the customer experience. It doesn’t take long in any contact center or workforce engagement management (WEM) conversation for AI to come up, because it’s changing the way we approach and manage customer engagements.
The use cases are broad and varied, including intelligent virtual agents (IVAs), chatbots, augmented agent services, quality metrics, analytics, noise suppression, and more. There are few, if any, aspects of the modern contact center that are not being reimagined with AI.
AI is a branch of technology with widespread applications, not a specific product. The various use cases often involve totally different AI technologies and vendors. For example, noise suppression technology and chatbots have little in common with one another.
Conversational AI is likely the AI technology with the biggest impact on the contact center. It powers chatbots, sentiment scoring, agent coaching, and more.
It wasn’t long ago that conversational AI existed only in science fiction. "Star Trek" provides a wonderful example of a speech UI. Think of how the Star Trek captains naturally speak to “computer” and get the useful responses they expect. “Computer” never pipes up on a false trigger, and it always accurately understands the intent and the request.
We haven’t quite caught up to that UI model, but we are getting closer; the rate of improvement in conversational AI is staggering. One aspect of conversational AI that we have conquered is giving our bots a voice. The Trek “computer” sounds pleasant, yet it’s clearly not human. It is a natural-sounding machine voice, whatever that is.
Making computers sound human can be controversial. In 2018, Google gave an infamous demonstration of its Duplex technology, in which a personal-assistant bot called local businesses to make appointments and reservations. The demo was impressive, but what stood out was a human-like voice that fooled real humans. The bot said “um” and made other human sounds we don’t expect from bots. The response (or outrage) was surprising: people were upset about being fooled, and Google agreed to make Duplex sound more machine-like.
Should bots be botish?
A lot has changed since 2018, and bots are far more prevalent today. No one really likes talking to machines, so maybe we should revisit how bots should sound. We’ve learned to yell “operator” or “representative” to get to a human, but that’s more about problem resolution than preference. As the bots become more useful, maybe they should sound more helpful, and that means more human.
Several CCaaS vendors are talking about the importance of empathy. For example, augmented-agent and agent-coaching platforms offer agents in-conversation suggestions on how to convey empathy. Unfortunately, empathy is a part of the AI puzzle we haven’t licked yet, but we can create a more human interaction simply by adjusting the gender, tone, and sass in the sound of our bots.
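On that last point, most mainstream text-to-speech engines already accept SSML (Speech Synthesis Markup Language), which lets a designer tune a bot’s voice, pitch, and pacing in markup rather than in the model. Here’s a minimal Python sketch of the idea; the voice names and the build_ssml() helper are illustrative assumptions, not any specific vendor’s API:

```python
# Illustrative sketch only: SSML is a real W3C standard, but the voice
# names and this build_ssml() helper are hypothetical stand-ins, not a
# specific TTS vendor's API.

def build_ssml(text: str, voice: str, pitch: str, rate: str) -> str:
    """Wrap bot dialogue in SSML that shapes how 'human' it sounds."""
    return (
        f'<speak>'
        f'<voice name="{voice}">'
        f'<prosody pitch="{pitch}" rate="{rate}">{text}</prosody>'
        f'</voice>'
        f'</speak>'
    )

# Same line of dialogue, two personalities: a warm, unhurried greeting
# versus a brisk, neutral one. No model changes, just rendering choices.
warm = build_ssml("How can I help you today?",
                  voice="en-US-WarmFemale", pitch="-2%", rate="95%")
brisk = build_ssml("How can I help you today?",
                   voice="en-US-NeutralA", pitch="+0%", rate="110%")

print(warm)
print(brisk)
```

Same words, two different personalities; much of how human a bot sounds is already a rendering choice.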
This is the real power of voice: it conveys personality far better than text can. For example, in the new “Top Gun: Maverick” movie, Val Kilmer reprises his role as Ice. A combination of a tracheostomy and chemotherapy has left Val Kilmer (and his character) unable to speak. In the film, he does speak, via a synthesized voice made by Sonantic. The software ingested hours of his recordings to produce a Val Kilmer voice model; type in the lines, and Val (or Ice) speaks. Five9 offers a similar service called Virtual Voiceover.
Sonantic could have reproduced the natural, recognizable voice we heard in his earlier movies, but the film’s creators wanted to “cast” a voice that better fit the older, weaker character. The movie tricked us into believing we were hearing a human, but it was a synthetic voice, and there weren’t any repercussions.
Giving bots personality could backfire. My mind goes to Marvin the Paranoid Android in “The Hitchhiker’s Guide to the Galaxy.” Marvin was built as a failed prototype of Genuine People Personalities technology, which aimed to give robots real, human personalities. Consequently, Marvin was afflicted with severe depression and boredom. Ask Marvin to escort someone to the bridge, and he’ll respond, “Here I am, brain the size of a planet, and they ask me to take you down to the bridge. Call that job satisfaction? ‘Cos I don’t.”
Personally, I like the idea of bots sounding more human, but there are clearly plenty of warnings in science fiction. In the film “Her,” an introverted character bonds with his AI operating system. In the film “Ex Machina,” an android dupes our human hero into setting it/her free.
These ideas are not far from reality. Blake Lemoine was recently terminated by Google after he published transcripts of his conversations with the LaMDA chatbot. Lemoine, an AI engineer, believed LaMDA had become sentient. Google and many other engineers dispute his conclusion. What evidently convinced Lemoine was the bot’s expressed fear of being turned off.
The author and futurist David Brin has written about the “robot empathy crisis.” He predicts that people will conclude AI is sentient because machines will provoke our human notions of sympathy. So there are limits to how far we may want to take these personalities, but adding a little more human into the bot equation could be a good idea.
It’s already common for bots to let us select their voice from a menu. Why not take it a bit further and let us select more human characteristics? I’m sure it’s just a matter of time until such options become mainstream.