Welcome to the fifth of our Conversations in Collaboration. We began with Cisco's Lorrissa Horton, who talked about the company's approach to data security in AI. No Jitter (NJ) then spoke with Google Cloud’s Behshad Behzadi about the use of generative AI (Gen AI) at Google, in its Contact Center AI offering and, more broadly, across enterprises. Next was Amazon Connect’s Pasquale DeMaio, who discussed Connect’s history of AI usage and how AI is used in its contact center offering. Most recently, Calabrio’s Joel Martins talked about current AI implementations in auto-scoring and scheduling and offered a glimpse of how generative AI can help contact center managers.
(Editor's Note: Artificial intelligence has a lot of specific, descriptive terms. Download our handy guide to AI vocabulary here.)
For this Conversation, NJ spoke with Frank Fawzi, President and CEO of IntelePeer, a CPaaS provider for enterprises that offers voice, messaging, ready-to-use applications, open APIs, and AI-powered analytics. Companies can use these capabilities to build and integrate communications-enabled workflows for customer experience and business process automation without the need for any programming.
No Jitter (NJ): You integrated ChatGPT into your SmartAgent solution. Could you start with some background on IntelePeer and then move into why that integration is significant?
Frank Fawzi: So, we've been in business for over 20 years – a rapidly growing, venture-backed company. We provide a communication automation platform to our customers – mostly large enterprise customers – to help them automate their customer interactions across modalities – voice, messaging, chat, email. We measure our success, and our customers’, in three ways.
The first is speed of deployment. I usually like to say we can be up and running in full production in less than three months with [for example] very complex, very high-value use cases with integrations to third-party systems, etc.
Second is automation. If you’re dealing with thousands of customer transactions, for example, it is pretty significant if you can automate 20, 30, 40 [… all the way] up to 90% of those customer interactions. It’s an opportunity to leverage your staff in meaningful ways. And that's across contact centers as well as enterprise workers.
The last piece is flexibility. In a time of rapid change, like now with AI, [we built our platform] so we could leverage new components – like large language models (LLMs) and that ChatGPT integration with SmartAgent you mentioned.
SmartAgent allows automation without an AI [at all, and also with] conversational AI-based automation. As of that April announcement, Gen AI-based capabilities can be deployed to further enhance those customer automation percentages I mentioned.
So, maybe customers in the past might have been able to use our platform to get customer interaction containment and self-service rates in the high 60s. With the new SmartAgent Gen AI, they [could reach] as high as the 90s in customer interaction and call containment.
NJ: And where do you fit in the contact center market?
Fawzi: We serve a different purpose in the ecosystem. We built our solutions around what we call “customer assist” versus “agent assist.” The whole idea is to automate as much as you can, especially for the regular, recurring intents – those [main] categories of requests that customers contacting a business have. You automate all of that “outside” the contact center, and then the agent serves every [remaining] call that cannot be dealt with [in that fashion]. So, if it's a higher-value call, or it's a more complex issue, then it goes to the CCaaS solution and the agent.
We are an across-the-enterprise communication automation and orchestration platform that allows you to extend from the contact center all the way into your knowledge workers and office workers.
NJ: So how do those knowledge workers fit in with IntelePeer?
Fawzi: When you think about the front end to every single customer interaction, part of what you're doing is triaging – a customer calls or emails a business, and the business then tries to figure out what that customer’s intention is and who that interaction needs to go to. In many cases, the company may be able to completely bypass the contact center and gather information directly from the frontline worker or the knowledge worker. With our platform, we can send that interaction directly to the designated workgroup, the needed person or the designated office – if that’s what the company wants to do.
We have customers that have hundreds of locations. Maybe it’s retail or something else. When a customer calls in, we first triage it, then we find the right location, and then even within that, we [further] triage where the call needs to go and who needs to deal with it – or [whether that] piece of information [they want] is already available. For example, it could be simply an opening hour for a location. In that case, we just retrieve that information and give it back [to the customer], and there's no need for it to go all the way to the contact center so that somebody can spend five minutes answering a question when that information is available.
NJ: What is the difference between automation before generative AI and now with generative AI?
Fawzi: There are three stages of automation. I’d describe the base level as almost programmatic – if, then – and it's based on conditions that can be retrieved from different data systems, a CRM, etc., and then making a decision based on that.
Leveraging conversational AI is the second level of automation, which our customers have been using for the last three years at least. That's more of a guided interview. You’ve probably even used this [type of system]. It guides you through a Q&A process to determine where the call goes, or maybe even gives you an answer if whatever you’re calling about is on [the system’s] prescribed list of things it can do.
The more recent one – which [we call our] diamond-level SmartAgent – expands that further by leveraging large language models, which allow you to do a lot more automation than that limited, guided exchange – the exchange you can have between a virtual agent and the customer expands significantly to cover more complex intents. And this hits on this whole concept of intent analysis and how you determine what the customer’s intent is.
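(Editor's Note: As a rough illustration of the base, programmatic level Fawzi describes – if/then rules evaluated against fields pulled from a CRM – a minimal sketch might look like the following. The field names, tiers, and destinations are hypothetical, not IntelePeer's actual schema.)

```python
# Hypothetical sketch of "programmatic" (if/then) routing based on CRM data.
# Field names and destinations are illustrative only.

def route_interaction(crm_record: dict) -> str:
    """Return a destination for an inbound interaction using simple rules."""
    if crm_record.get("open_ticket"):              # existing case: keep it with the assigned team
        return "assigned_agent_queue"
    if crm_record.get("account_tier") == "enterprise":
        return "priority_queue"                    # high-value customers get a priority path
    if crm_record.get("balance_past_due", 0) > 0:
        return "billing_selfservice_flow"          # route to an automated payment flow
    return "general_queue"


print(route_interaction({"account_tier": "enterprise"}))  # -> priority_queue
print(route_interaction({"balance_past_due": 120.0}))     # -> billing_selfservice_flow
```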
NJ: When you're programming the conversational AI – this flow you’re describing – do you have to say, "Look for these words, you understand these words," whereas with Gen AI, the customer just types and then the large language model figures it out. Is that accurate?
Fawzi: Yes, it is. In the case of conversational AI, you would design the Q&A to consider all the possibilities and the different edge cases to create [as best as possible] a highly reliable exchange.
However, generative AI has truly changed what we can do – it’s transformed that guided tour, if you will, into an open-ended exchange whereby our platform can extract the intent from the general-purpose language used by any consumer. And then, based on that, our system defines the appropriate action and appropriate desired outcome for the customer interaction.
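(Editor's Note: To make the contrast concrete, here is a loose sketch of the two approaches – a hand-designed phrase match of the kind a conversational AI flow relies on, next to handing the caller's free-form text to an LLM and asking it to return an intent label. The intent list and the `call_llm` function are placeholders, not IntelePeer's API.)

```python
# Illustrative contrast only; `call_llm` stands in for whatever LLM endpoint is used.

DESIGNED_INTENTS = {
    "store_hours": ["opening hours", "what time", "open today"],
    "billing": ["bill", "payment", "charge"],
}

def keyword_intent(utterance: str) -> str:
    """Conversational-AI style: match against phrases the designer anticipated."""
    text = utterance.lower()
    for intent, phrases in DESIGNED_INTENTS.items():
        if any(phrase in text for phrase in phrases):
            return intent
    return "unknown"  # anything the designer did not predict falls through

def llm_intent(utterance: str, call_llm) -> str:
    """Gen AI style: let the model classify open-ended language into an intent."""
    prompt = (
        "Classify the caller's request into one of: store_hours, billing, other.\n"
        f"Caller: {utterance}\nIntent:"
    )
    return call_llm(prompt).strip()
```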
NJ: Can the introduction of Gen AI alleviate some of the burden of designing call flows?
Fawzi: With respect to design and Gen AI today – I don't want to say it's simpler, but it's different. It’s more compact because it doesn't involve this whole iterative process of trying to design the perfect conversation exchange.
We have a 10-step process that starts with intent analysis and leads you all the way to production. When followed, those 10 steps lead to a successful implementation. The difference [with Gen AI], I would argue, is that you don’t have to design that [perfect] conversation – you can make it [more] open and you can actually end up with what you need. But you still have to follow the right steps.
NJ: So, if you have an existing contact center in which your product is deployed, do you rip and replace the conversational AI with the Gen AI version or do you implement an A/B testing approach so you can see which one is performing better?
Fawzi: A/B testing is absolutely what we recommend for customers – we [want] to make sure they're comfortable with the new platform versus the original. So, you start with side by side to get into the details and analytics that show you the outcomes and make sure you're comfortable with the results.
At the end of the day, what's happening is that the transition from conversational AI to a generative AI tool or platform can be done simply with a drop-down menu that allows you to select a different LLM or a different AI bot framework. We want to make sure [our customers] are comfortable with the outcome for a period of time, and then [they] click and implement that switch completely.
But once you've done that, the switch is as simple as selecting the drop down and it’s activated.
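(Editor's Note: As a loose sketch of the side-by-side trial Fawzi describes – a share of interactions routed to the Gen AI flow while the rest stay on the existing conversational AI flow, with a single setting to flip over completely – one configuration-driven approach could look like this. The flow names, percentages, and config keys are made up for illustration.)

```python
# Hypothetical A/B split between an existing conversational AI flow and a Gen AI flow.
# The "drop-down" switch is modeled here as a single config value.
import random

CONFIG = {
    "active_framework": "ab_test",  # later switched to "gen_ai" once results look good
    "gen_ai_share": 0.20,           # 20% of interactions hit the Gen AI flow during the trial
}

def pick_flow(config: dict) -> str:
    """Choose which automation flow handles the next interaction."""
    if config["active_framework"] == "gen_ai":
        return "gen_ai_flow"
    if config["active_framework"] == "ab_test":
        return "gen_ai_flow" if random.random() < config["gen_ai_share"] else "conversational_flow"
    return "conversational_flow"

# During the trial, containment and outcome analytics are compared per flow;
# the full cutover then amounts to: CONFIG["active_framework"] = "gen_ai"
```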
NJ: Can you explain what you mean by containment?
Fawzi: In this context, the word containment typically refers to 100% automated self-service – it could be a voice- or message-based customer interaction. Keep in mind that all these modalities – voice, SMS, email – leverage the same kind of AI in the background.
The modality of communication makes a huge difference when it comes to other factors such as latency and the quality of the voice we use – making sure it's warm and acceptable to the listener as well as, of course, efficient.
NJ: So, what you're doing, is it text based…voice…both?
Fawzi: It's all of the above. Because of the strong history that we have in voice, we typically have complex and very large voice requirements as the core in almost every deployment we have. We will also combine with SMS, messaging, chat, and so on.
We serve customers that could be utilities dealing with snowstorms, stormy days, hurricanes and so on. They have to manage and support thousands of simultaneous calls coming into the contact center and automatically serve and respond to them – without a single missed call. That’s very important in those mission-critical situations. We have a lot of customers who value that – utilities, financial services, retail during busy shopping seasons.
NJ: When you’re handling voice, the voice gets transcribed into text, then interpreted and then passed back?
Fawzi: It's a two-stage process…a screening. It’s not a duet, because that would create a lot of latency. It’s doing the speech recognition and then immediately passing the transcript on to the large language model, then, of course, getting the response back and converting it back into voice. All of this must happen with a very precise mechanism and millisecond-level reaction times to create the best possible, lowest latency, so that callers are dealing with completely human-like voice bots they can react to positively and don't feel like they're dealing with an automated, mechanical system.
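(Editor's Note: The process Fawzi describes maps to a speech-to-text / LLM / text-to-speech chain in which each stage hands off as soon as it has a result. A heavily simplified sketch – with placeholder functions standing in for the actual ASR, LLM, and TTS services, and a timer to make the latency concern visible – might look like this.)

```python
# Simplified voice-bot turn: ASR -> LLM -> TTS. All three service calls are placeholders.
import time

def handle_turn(audio_chunk: bytes, asr, llm, tts) -> bytes:
    """Process one caller utterance and return synthesized audio for the reply."""
    start = time.perf_counter()

    transcript = asr(audio_chunk)   # 1. speech recognition
    reply = llm(transcript)         # 2. pass the transcript straight to the language model
    audio_out = tts(reply)          # 3. convert the model's reply back into voice

    elapsed_ms = (time.perf_counter() - start) * 1000
    # In production the stages would stream and overlap; keeping the total reaction time
    # very low is what makes the bot feel human rather than mechanical.
    print(f"turn latency: {elapsed_ms:.0f} ms")
    return audio_out
```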
NJ: So, back to the ChatGPT integration, what does that look like from a deployment perspective?
Fawzi: We're using our own language model today, combined with OpenAI through our relationship with Microsoft Azure. The way you can think about this is: when we create a solution for our customers, the key is to create the right approach, the right combination of models, because you [don’t] want it to be too broad – maybe [you just want to handle] specific industries like airlines.
NJ: We’ve been reading that “smaller” models actually perform better at certain tasks than something like OpenAI's models, which are trained on the broader Internet – is that what you’re finding?
Fawzi: This is exactly what we're seeing. The way we're working on this is, for the most part, we can get the answers from the smaller model and only go out to the larger Azure/OpenAI model as needed – where the complexity may require that larger language model.
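(Editor's Note: A rough sketch of that routing – answer with the smaller, domain-focused model when it is confident, and fall back to the larger hosted model only when the query requires it. Both model calls, the confidence score, and the threshold are assumptions made for illustration.)

```python
# Hypothetical "small model first, large model as fallback" routing.
# `small_model` and `large_model` stand in for a domain-tuned model and a larger hosted one.

CONFIDENCE_THRESHOLD = 0.8  # arbitrary cutoff chosen for this sketch

def answer(query: str, small_model, large_model) -> str:
    draft, confidence = small_model(query)  # assumed to return (answer, confidence score)
    if confidence >= CONFIDENCE_THRESHOLD:
        return draft                        # cheap, low-latency path for most traffic
    return large_model(query)               # escalate only the complex cases
```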
NJ: A customer comes to you and says, "I'm really interested in Gen AI, but I'm concerned that my data is going to get used to train your model, or my data – and my customers’ data – is going to be compromised somehow." What kind of safeguards do you bring?
Fawzi: We are having those conversations in almost every single instance where we're working with customers on Gen AI. Candidly, some of this could have happened with conversational AI, too, and it really rises to the top for customers in verticals that are very worried about PII [personally identifiable information] – health care, financial services. We're set up with PCI compliance and HIPAA, of course, as well.
We created this framework several years ago when we went down the conversational AI path of removing PII, but most importantly, as we deal with generative AI and talk about training the models, every customer's data is only available to that customer. We're not creating separate fine-tuning [instances] of all the different elements that serve a particular customer.
We believe that creating a secure enterprise [offering] that’s robust and reliable is an essential part of differentiating your offer in the market versus others. And you really do have to differentiate between enterprise-quality and consumer-quality solutions. That's a critical part of the learning we've done over the last several years to make sure that we build proper guardrails and privacy and security around our conversational AI.
We've extended that and carried it into other frameworks, such as the NIST risk framework, and we’re certainly continuing to look at how we can make sure that what we're doing is meeting our SOC 2 compliance requirements and the other compliance regimes we have to be certified against every single year.
We're committed to our customers maintaining their data integrity and privacy. When you build a communication workflow with our platform, that data and everything else is yours, only yours, and only benefits you.
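(Editor's Note: Fawzi mentions removing PII before customer text is used with the models. Purely as an illustration – real deployments rely on far more robust detection than a handful of regular expressions – a redaction step ahead of any model call could look like this.)

```python
# Toy PII scrubber run before text reaches a model or a log.
# Real systems use dedicated PII-detection services; these patterns are illustrative only.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b(?:\+?1[-.\s]?)?\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace detected PII with a label before the text is passed along."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Call me at 415-555-0123 or email jane.doe@example.com"))
# -> Call me at [PHONE] or email [EMAIL]
```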