
How to Deploy Contact Center AI Ethically, Responsibly


Image: Prostock-studio - Alamy Stock Photo
A long time ago, my first real post-college job was at a call center for a major bank. An automatic call distribution (ACD) system assigned calls to a group of about 20 agents who handled credit card, auto loan, and mortgage information. Once a month, each agent would sit down with the manager and receive a ranking. Every month, the same rather dim-witted agent (let’s call her Agent #1) would finish first, while the agent who finished second (let’s call her Agent #2) was clearly smarter than everyone else in the room. Yet Agent #2, the smart one, always finished second. This was a mystery to those of us who cared, particularly as Agent #1 received bonuses and awards for her continued place at the top.
 
While the manager always had the ability to listen in on calls, he didn’t do so very often, which is why Agent #1 got away with her bad behavior for so long. It turned out that the reason she finished strong month after month was that the ACD was measuring call duration and overall call volume per agent. So when a complicated call came in, Agent #1 — who may not have been as dumb as I thought — would disconnect the call so her numbers would remain strong. That is, she could handle more calls in less time because she hung up on the complicated ones.
 
From a customer service perspective, this was horrible on multiple levels. Chief among them: those disconnects meant that a caller with a problem (no one ever calls to say, “my bill looks terrific this month, thanks a lot”) had to call back, and, given how ACD distribution works, was likely routed to another agent, who then had to spend more time and emotional energy dealing with an even more irritated customer and whatever the perceived issue was.
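To make the measurement flaw concrete, here is a minimal, hypothetical sketch (the field names, numbers, and penalty weights are invented for illustration, not taken from any real ACD) of how a ranking built only on call volume and handle time rewards exactly the behavior Agent #1 discovered, while a version that also counts dropped calls and callbacks does not:

```python
from dataclasses import dataclass

@dataclass
class AgentStats:
    # Illustrative monthly numbers; no real ACD exposes exactly these fields.
    name: str
    calls_handled: int         # total calls taken
    total_handle_seconds: int  # time spent on those calls
    early_disconnects: int     # calls dropped before resolution
    repeat_calls: int          # callers who had to phone back about the same issue

def naive_score(a: AgentStats) -> float:
    """Rank purely on volume and speed, the metric Agent #1 gamed."""
    avg_handle = a.total_handle_seconds / max(a.calls_handled, 1)
    return a.calls_handled / avg_handle  # more calls, shorter calls => higher score

def adjusted_score(a: AgentStats) -> float:
    """Same inputs, but dropped calls and callbacks now carry a cost."""
    return naive_score(a) - 2.0 * a.early_disconnects - 1.5 * a.repeat_calls

agents = [
    AgentStats("Agent #1", calls_handled=400, total_handle_seconds=60_000,
               early_disconnects=80, repeat_calls=70),
    AgentStats("Agent #2", calls_handled=320, total_handle_seconds=96_000,
               early_disconnects=2, repeat_calls=5),
]

for score in (naive_score, adjusted_score):
    ranking = sorted(agents, key=score, reverse=True)
    print(score.__name__, "->", [a.name for a in ranking])
```

With only volume and handle time in the formula, hanging up on hard calls wins; once disconnects and callbacks carry a cost, the ranking flips, which is the whole point of asking what a metric actually rewards.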
 
Understanding What Contact Center AI is Measuring
Certainly, contact center metrics are much more sophisticated than they were then, but the bottom line is that they still require consideration of these critical questions:
 
  • How are metrics determined?
  • How is agent performance graded?
  • And finally, how is success measured?
 
When I discussed these issues recently with Tom Brannen, senior industry analyst for On Convergence, the very first thing that he mentioned was the importance of AI and how it’s defined. For a definition, I turn to Oren Etzioni, CEO of the Allen Institute for AI, who suggested in a recent issue of the MIT Technology Review that AI has two very different meanings. AI “refers both to the fundamental scientific quest to build human intelligence into computers and to the work of modeling massive amounts of data,” Etzioni stated. The second definition is the most suitable for application in the contact center space.
 
The role of AI in the contact center has been evolving, and will continue to evolve, at a consistent pace. That is, the nature of what’s being measured, and how those measurements are used to generate “useful” information, is constantly changing. The sophistication of AI-supported applications has also grown: they began as essentially a “next-step IVR,” letting customers raise basic questions and concerns without human intervention, and their complexity continues to improve. “AI isn’t figuring out what the entity that has deployed it wants. What it is doing is translating that enterprise’s written commands into potential outcomes,” Brannen said.
 
This is why identifying those questions and carefully tweaking them is so critical to successful AI deployment. Are questions and metrics designed to validate already established conclusions, or are they designed to take a hard look at the actual quality of service? In fact, does management, or those reviewing the information generated by the AI processes, really want to know about the quality of service, or is it looking for an easy way to validate items that are difficult to quantify?
 
I've heard that many vendors aren't actually doing AI themselves, but rather, they are relying on another vendor's speech recognition technology and simply providing some analysis once the number crunching is done.
 
Understanding How Data is Processed and Metrics Are Determined
This is yet another reason why a careful and detailed understanding of what data goes into the processes, how it’s weighted and manipulated, and what the ultimate outcomes are is so sensitive and important. Additionally, the end user must be able to change the metrics at every level as circumstances change, most notably as end-user questions and requirements change.
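As a sketch of what that adjustability might look like (again, the metric names and weights here are invented for illustration, not drawn from any vendor’s product), the weighting applied to each input can live in configuration that the contact center owner revises as requirements change, without rewriting the scoring logic:

```python
# Hypothetical composite score: each metric is normalized to 0..1 upstream,
# and the weights are plain configuration the business owns and can revise.
DEFAULT_WEIGHTS = {
    "first_contact_resolution": 0.4,
    "customer_satisfaction":    0.3,
    "average_handle_time":      0.2,  # lower is better, so it is inverted below
    "escalations":              0.1,  # lower is better, inverted below
}

# Metrics where a smaller value should raise, not lower, the score.
LOWER_IS_BETTER = {"average_handle_time", "escalations"}

def composite_score(metrics: dict, weights: dict = DEFAULT_WEIGHTS) -> float:
    """Combine normalized (0..1) metrics into one weighted score."""
    total = 0.0
    for name, weight in weights.items():
        value = metrics.get(name, 0.0)
        total += weight * ((1.0 - value) if name in LOWER_IS_BETTER else value)
    return total

# When circumstances change (say, resolution quality starts to matter more
# than speed), only the configuration changes, not the code:
q4_weights = {**DEFAULT_WEIGHTS,
              "first_contact_resolution": 0.5,
              "average_handle_time": 0.1}
```

Keeping the weights in data rather than buried in the code is one way to make “changing the metrics on all levels” a routine business adjustment instead of a vendor request.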
 
Brad Cleveland, speaker, consultant and author of “Contact Center Management on Fast Forward” and “Leading the Customer Experience,” underscores the need to use AI responsibly. “These capabilities are not passive. You have to implement them with a good understanding of what’s right, what’s fair, and what you’re trying to accomplish. Tech aside, how should you treat your customers and employees? Start there and then work back to the design and use of AI.”
 
At the end of the day, AI remains a powerful but soulless tool. How it’s deployed and how effectively it’s deployed remain the key considerations for any end-user contemplating, or currently using, AI processes. With this in mind, these questions remain:
 
  1. How are metrics determined and adjusted based on time and experience?
  2. How quickly can metrics be changed or adjusted as circumstances warrant?
  3. If AI tools are used to evaluate performance, how is that evaluation performed, and how valid is the input used to generate the output?
  4. Ultimately, how is success for whatever processes the AI is used to accomplish measured?
 
Even though it was a long time ago, I learned some valuable life lessons working at that contact center. First, being nice pays — never yell at the agent. The people who were nice frequently got more than they were owed, simply because it was pleasant to have someone not scream — or swear — in my ear. Second, if management had been listening in on calls with the frequency that it should have, Agent #1 might have been caught long before she was, and Agent #2 (who remains a good friend to this day) would have gotten the kudos she ultimately received much earlier, to the benefit of the employer. The metrics that work today may not work well tomorrow, so those processes and data targets need to change with the times. The absolute bottom line is that AI has no common sense, and if AI processes are going to be used successfully, it won’t be in a vacuum, but with careful analysis by knowledgeable contact center managers.