
Assessing Gen AI's Impact in the Contact Center, Part 1

Welcome to No Jitter’s Conversations in Collaboration series. In this series we’re asking industry leaders to talk about how AI can boost productivity – and how we define what productivity even is – with the goal of helping those charged with evaluating and/or implementing gen AI get a better sense of which technologies will best meet the needs of their organizations and customers.

In this conversation, which is part one of two, we spoke with Christina McAllister, a senior analyst at Forrester who helps customer service and customer experience (CX) leaders transform their strategies and capabilities in the age of the customer. McAllister’s research focuses on the technologies that enable and augment the customer service agent. These include customer service cloud platforms and applications, AI-infused agent workspaces, conversation intelligence, and digital engagement channels. Her research also explores how AI is transforming contact center operations and the agent experience.

In this installment, McAllister discusses how contact centers define their performance goals, the difference between productivity and efficiency, and whether the contact center really has any use for generative AI.

Christina McAllister, Forrester

No Jitter (NJ): I know this varies, but how do contact centers typically track performance?

Christina McAllister (CM): The most common agent performance metrics are average handle time (AHT) and [first call] resolution (FCR). It's best if those are side by side because if you crunch down average handle time too far, resolution also goes down with it. Resolution is the best proxy for the customer experience (CX) because customers are obviously not calling into the contact center for fun – they want to get their issues resolved. [For the contact center], handle time keeps that under control.
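To make that pairing concrete, here is a minimal sketch of how the two metrics are computed from the same set of call records; the field names and sample data are illustrative, not drawn from any particular platform.

```python
# Hypothetical call records: handle time in seconds and whether the
# issue was resolved on that first contact.
calls = [
    {"handle_secs": 240, "resolved_first_contact": True},
    {"handle_secs": 610, "resolved_first_contact": False},
    {"handle_secs": 180, "resolved_first_contact": True},
]

# Average handle time (AHT): total handle time divided by calls handled.
aht = sum(c["handle_secs"] for c in calls) / len(calls)

# First call resolution (FCR): share of calls resolved on the first contact.
fcr = sum(c["resolved_first_contact"] for c in calls) / len(calls)

print(f"AHT: {aht:.0f} seconds, FCR: {fcr:.0%}")
```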

Some companies lean a little harder on average handle time than they perhaps should. That is all in service of reducing costs. Every second is expensive, especially if you have onshore or in-house staff.

But when it comes to the clients I speak with who are more balanced in their approach, the way they phrase it is that they need to track average handle time [because] they need that information for staffing, forecasting, etc. – but they don't want to overemphasize it. So, they say [to their agents]: “We want you to take as much time as you need to solve the issue but not a second longer, because after that, you're wasting the customer’s time.”

So there's nuance in how much attention any given contact center leader puts on metrics, but those [two] are the most common.

NJ: How would you define agent productivity, efficiency, and performance? Are those terms used synonymously?

CM: Performance is not synonymous with efficiency and productivity. It could be, but it shouldn't be. Performance is more like the “macro” – there are many things you need to balance and weigh. For example, if you're in a regulated industry, were you compliant with all the rules? There are also various parts of a person's performance – some of it is learning and upskilling, a person’s knowledge of things – but those don't have “numbers” as easily associated with them from a productivity and efficiency perspective.

Usually, you see things like handle time, but then also the number of issues resolved. You could call this throughput and it is especially relevant in chat. If the contact center is handling a lot of chat, you'll see measures like concurrency, which looks at how many conversations an agent can handle at the same time and how fast they are getting through them.

They'll also look at measures like occupancy and utilization to understand how much of the time an agent is in their seat and allocated to work their shift is productive time – that is, time the agent spends talking to or supporting a customer rather than sitting idle waiting for a call or a chat to come through. If agents are sitting idle a lot, then the contact center is overstaffed. But that's not the agent’s fault.
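Workforce-management tools define these measures in slightly different ways, but a rough sketch of the idea – productive time against logged-in and scheduled time – might look like the following; all numbers and names are illustrative.

```python
# Illustrative shift numbers for one agent (minutes).
shift_minutes = 480          # paid / scheduled shift
logged_in_minutes = 450      # time signed into the queue
talk_and_wrap_minutes = 360  # time handling or wrapping up customer contacts

# Occupancy: share of logged-in time spent actually handling work.
occupancy = talk_and_wrap_minutes / logged_in_minutes

# Utilization: share of the scheduled shift spent logged in and available.
utilization = logged_in_minutes / shift_minutes

print(f"Occupancy: {occupancy:.0%}, Utilization: {utilization:.0%}")
# Persistently low occupancy across many agents suggests overstaffing for
# the incoming volume -- which, as McAllister notes, is not the agent's fault.
```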

So, productivity is sometimes more about making sure there are enough bodies to support the expected volumes [of calls]. Efficiency is usually more about the individual [contributor]. There isn’t a hard line between those concepts, but that's the easiest way that I would tease those apart.

NJ: Is that where the value of a virtual agent or a chatbot comes into play – handling routine inquiries? And does that mean that the more complex topics that require a human are coming through to the human agents?

CM: If we assume that the bot has knowledge of the types of inquiries that people are likely to have, then yes, the result is often that you will contain, deflect or automate – whatever language you choose to use – the routine things.

There's downstream impact, though. I see this a lot: if you're measuring on handle time, you might have a goal to get your IVR or chatbot to contain a higher percentage of your customer interactions while simultaneously having a goal to maintain your average handle time. If you contain the easy, quick ones, your average handle time will go up.
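The arithmetic behind that effect is straightforward to illustrate. In the hypothetical mix below, the bot absorbs the short, routine calls, so the calls left for agents are longer on average even though nothing about agent behavior changed.

```python
# Hypothetical pre-bot mix: 70% routine calls at ~4 minutes,
# 30% complex calls at ~10 minutes.
routine_calls, routine_aht = 700, 4.0
complex_calls, complex_aht = 300, 10.0

total_minutes = routine_calls * routine_aht + complex_calls * complex_aht
aht_before = total_minutes / (routine_calls + complex_calls)

# After the chatbot contains the routine contacts, agents only see the
# complex ones -- fewer calls overall, but a higher average handle time.
aht_after = complex_aht

print(f"AHT before containment: {aht_before:.1f} min")  # 5.8 min
print(f"AHT after containment:  {aht_after:.1f} min")   # 10.0 min
```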

So those measurements are at odds with one another. In some cases, companies have set up their KPIs and team goals in a way that does not reflect that reality: if they have success here [with IVRs or chatbots], they end up getting hammered [on handle time]. In a way, you're hurt for succeeding in those scenarios.

NJ: So does that mean that cost goes up?

CM: Not really. It is just that if you were relying on handle time as a proxy for efficiency, your previous average doesn't matter anymore because you removed a big chunk of cost from your contact center.

For example, if a contact center succeeds at containing 30% of its contacts with a chatbot or a voice bot, that will have downstream impact. You might not need as many bodies to support the incoming volume because there are fewer calls coming in, but those calls are longer [because those agents are handling more complex issues].

So it is better to look at different measures around agent utilization during their shifts. Attrition is always very high in the contact center, so there may not be an active [effort] to reduce headcount. But as headcount comes down through natural attrition, you are rightsizing to fit the reality that you have.

So, make sure you understand which KPIs you are holding as a goal and ask yourself: Is that goal realistic? Once the nature of your [customer] contact mix changes, and [the agents] start handling all complex [issues], your average [handle time] is going to be different.

NJ: Okay. So throw generative AI into the mix with the various use cases around assistance and summarization before, during and after the interaction. What impact are you seeing?

CM: I'll step back just to talk about the use cases that I tend to see, with the caveat that we're still really early in the adoption phase. I have not seen many big companies getting attributable value from their deployments yet – they’re mostly still piloting these solutions.

As you mentioned, summarization is a common use case. The average agent has after-call work. At the end of every call, they spend some time taking notes, selecting the case reason or the disposition code, etc. This could take 30 seconds or it could be upwards of two and a half minutes, depending on the industry and the complexity of the call.

If you have your calls transcribed and you have generative AI summarizing that transcript, you can also have it intelligently allocate a call reason. If you give the AI a set of disposition codes, it can pick the right one, and the call notes get way more accurate.
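A minimal sketch of that pattern follows; `complete()` is a hypothetical stand-in for whichever LLM API a vendor actually exposes, and the disposition codes and prompt are illustrative only.

```python
import json

DISPOSITION_CODES = ["billing_dispute", "address_change", "cancellation", "other"]

def summarize_call(transcript: str, complete) -> dict:
    """Ask the model for call notes plus one disposition code from a fixed list.

    `complete` is a hypothetical callable that sends a prompt to an LLM
    and returns its text response.
    """
    prompt = (
        "Summarize this customer service call in 2-3 sentences, then choose "
        f"exactly one disposition code from {DISPOSITION_CODES}. "
        'Respond as JSON: {"summary": "...", "disposition": "..."}\n\n'
        f"Transcript:\n{transcript}"
    )
    result = json.loads(complete(prompt))

    # Guard against the model inventing a code outside the allowed list.
    if result.get("disposition") not in DISPOSITION_CODES:
        result["disposition"] = "other"
    return result
```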

Not to disparage the agent, but if you have 5,000 people, it’s hard to get all of them to do it the same way every time. [With AI], the notes on every call are more accurate and are analyzable, which was really never the case before.

If you had, for example, two minutes of after-call work and [you deploy gen AI to do that work], then you no longer have that gap, so [agent] utilization gets better and throughput [improves]. If your agents could handle 50 calls in a day and with [gen AI] they can now handle 53 – and if you do that across 500 agents, it's meaningful.
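Using McAllister's own figures, the back-of-the-envelope math looks like this (the two-minute savings and call counts come from her example; everything else is illustrative):

```python
agents = 500
calls_per_agent_before = 50   # calls handled per agent per day today
calls_per_agent_after = 53    # after-call work largely automated away

# Roughly 2 minutes of after-call work saved on each of the ~50 daily calls
# frees about 100 minutes per agent per day, which is where the extra
# capacity comes from.
minutes_freed_per_agent = 2 * calls_per_agent_before

extra_calls_per_day = agents * (calls_per_agent_after - calls_per_agent_before)
print(f"Minutes freed per agent per day: {minutes_freed_per_agent}")
print(f"Extra calls handled per day across the team: {extra_calls_per_day:,}")
```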

There is a risk of potentially burning your agents out, though, if that was the only breather they had between calls and now it's gone. I would say there needs to be consideration of the impact on the [agent] experience and whether it's going to lead to burnout in that way. You need to be careful.

So that's [one] way [gen AI] is measured. It is literally a reduction in the overall handle time of that after-call work. It just goes away because you don't need to do that anymore. [And it’s an] easy measure as far as building that business case.

NJ: What are your thoughts on the in-call assistance that AI, or gen AI specifically, can provide to agents?

CM: If you’re referring to suggested responses, I'll separate that from a different kind of assistance that you mostly see with chat agents.

When the AI is providing real-time guidance or content suggestions – scripting types of suggestions – I would say this is one of the areas where I am not confident that generative AI is required or even the right choice. Many vendors in the market are eager to place generative AI into this use case, but I think the broader market is still a bit in flux on how they’re going to price this kind of assistance.

In the old way, the real-time guidance wasn't generative. You would build out concrete examples of scripts or steps or whatever you want the agents to do, which, in many cases, wouldn't change. So you're not incrementally paying every time that [guidance] is sent to an agent, whereas every time you hit the generative model, you pay for that. The scenario I need to see from vendors is an incremental evolution of what I'm talking about here.

For example, say I have a roster of new agents. They are brand new; they don't know anything. [Some of what they do] will be very similar every single time and all of them will need to receive guidance on those things. But that’s all very repeatable. Why do we need to generate it every time?

What I'm hoping to see from vendors is the ability to generate [guidance] a couple of times but then, now that we know that [the guidance] is consistent, let's anchor it down so that it is not being generated every single time. Otherwise, your costs remain pointing up and to the right. That won’t work for the long-term ROI of these solutions.
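Read as an implementation pattern, that suggestion amounts to generating a piece of guidance once (or a few times), then pinning it so repeat requests are served from a store rather than from the model. A minimal sketch, with a hypothetical `generate_guidance()` callable standing in for the pay-per-call generative model:

```python
# Cache keyed by the detected intent or call step, so guidance that has
# proven stable is generated once and then served from the store.
guidance_cache: dict[str, str] = {}

def get_guidance(intent: str, generate_guidance) -> str:
    """Return pinned guidance if we have it; otherwise generate and pin it.

    `generate_guidance` is a hypothetical callable that hits the
    pay-per-call generative model.
    """
    if intent not in guidance_cache:
        guidance_cache[intent] = generate_guidance(intent)
    return guidance_cache[intent]
```

In practice a team would likely review a few generated variants before pinning one, which is closer to the “generate it a couple of times, then anchor it down” idea – but the cost shape is the same: the model is hit once per distinct piece of guidance rather than once per interaction.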

Want to know more?

In “How Generative AI Will Improve ROI in the Contact Center in 2024,” IntelePeer’s Frank Fawzi provides his perspective on the topic. Another article talks about how generative AI can help reinvent CX, and another discusses how gen AI might be used to replace agents. A third covers a Boston Consulting Group and Microsoft study that substantiates McAllister’s point regarding the reduction (or outright elimination) of an agent’s after-call work.