No Jitter is part of the Informa Tech Division of Informa PLC


Do You Really Want ChatGPT Talking to Your Customers?

I rarely write about the contact center business, as my practice focuses on the other part of the enterprise communications infrastructure, the various systems our employees use to communicate among themselves and with the outside world. When it comes to contact centers, I’m more in the group collecting depressing stories of wasted hours with a phone pressed to my ear listening to a customer service rep scroll through countless screens looking for any nugget of information to get me off the phone.

Earlier this week, Irwin Lazar posted a piece reviewing five potential applications for ChatGPT in the collaboration space, highlighting a number of the themes we saw at the Enterprise Connect 2023 event in Orlando, such as generating meeting summaries. That particular application was cited in no fewer than four of the six keynotes. The thing I like most about Mr. Lazar’s piece is its restraint. Rather than “promising the moon,” he cites near-term applications with initial versions already available.

I have not noticed such restraint in the contact center. Given the networking industry’s long-held fascination with shiny new objects, it is not surprising that contact centers have latched onto this fledgling artificial intelligence (AI) technology with both hands. I have every reason to believe that AI will yield significant benefits in many areas, but at this stage, I see no reason to believe that communicating effectively with human beings will make that list of accomplishments anytime soon.


ChatGPT: A Machine to Produce “Glib BS”

The particular part of AI that the contact center industry has clutched to its breast is generative AI, specifically the generative pre-trained transformer (GPT) family of language models that use a probability distribution of word sequences to predict what word should come next. That’s the basis of how tools like ChatGPT work.

That is where the “training” part of these models comes in. To predict what word should come next in any context, these systems have to ingest enormous amounts of written material. To be clear, this is a probability computation; it has little to do with understanding what they’re talking about or with the factual accuracy of the words being delivered. ChatGPT is just telling you which word will likely come next! In short, ChatGPT is designed to produce “Glib BS.”
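The "predict the next word from probabilities" idea can be made concrete with a toy sketch. This is a minimal bigram model over a made-up ten-word corpus, not how GPT models are actually built (they use neural networks over far longer contexts), but the principle — counting which words tend to follow which, then emitting the most probable continuation with no understanding involved — is the same.

```python
from collections import Counter, defaultdict

# Toy corpus; a real model trains on hundreds of billions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a bigram model; GPTs condition on
# much longer contexts, but the idea is identical).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most probable next word -- no understanding involved."""
    counts = following[word]
    best, n = counts.most_common(1)[0]
    return best, n / sum(counts.values())

# In this corpus "the" is followed by cat (2x), mat (1x), fish (1x),
# so the model confidently predicts "cat" -- whether or not "cat"
# makes any factual sense in context.
print(predict_next("the"))  # ('cat', 0.5)
```

The model will happily generate fluent-sounding continuations that are wrong, which is exactly the "Glib BS" failure mode described above.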

Once you understand what ChatGPT is trying to accomplish, its widely reported aberrant behavior becomes a predictable outcome. My guess is that the training involved reading lots of “Seinfeld” scripts, because “a show about nothing” is going to give you a lot of nothing to talk about!

Having had the misfortune of working alongside countless specialists in the production of Glib BS, I can’t see this as a path to resolving anything.


Does The Contact Center Need “Glib BS”?

The main reason I have spent my telecom career avoiding contact centers is that, while I love a challenge, “tilting at windmills” (romantic crusades to accomplish the impossible) is not a game I’m cut out for. I like building stuff that works, and while I don’t know much about generative AI for creating art, music, or fixing computer programs, I am an expert at communicating with people.

Companies have a problem with customer service that confounds traditional solutions. Businesses have had enormous success in producing tremendous volumes of great physical products at price points that millions of people can afford. This cornucopia is the result of giant leaps in design and manufacturing technologies, all of which could be easily quantified by energetic MBAs (I’m one of them, so I’m entitled to comment).

However, this great engine of continuous improvement grinds to a halt when it comes to customer service, or simply, directly engaging with human beings. In manufacturing we can bend metal into any shape, integrate multiple functions onto a single chip, and work with suppliers to build sophisticated subcomponents to reduce our manufacturing costs. Customer service involves communicating with people, understanding their issues (despite language and vocabulary challenges), and then determining the best way to assist them.

Now, there is a rather obvious solution to this problem: offer enough salary to hire smart people with above-average communication skills, then spend more money to train them on your products, how people use them, the typical problems they encounter, and how to navigate your organization’s systems and resources to make those customers happy. Then you can use all the swell contact center monitoring tools available to ensure the process is working.

I have run into a few contact centers that have made such an investment, though those are typically businesses that are in the “service business.” However, for most organizations it appears the obvious solution (i.e., pay the agents more money and actually train them) is off the table, so contact center managers embark on the romantic quest to use machines to solve the problem. Those attempts have included ideas like sending customers back to the company website (whose deficiencies were what got you to pick up the phone in the first place), interactive voice response (IVR) systems (where most respondents are picking the “Other” option), or worse yet, conversational bots whose primary objective seems to be getting you to say words your mom told you not to say.

It’s just a guess, but I don’t think Glib BS is really going to get us very far in improving this.


Maybe We Try Going Half-Way?

Like just about everyone in the tech industry, I love enormous leaps forward; unfortunately, they don’t happen very often, and almost never without multiple missteps along the way. Maybe it’s time to stop trying to do the impossible and take smaller steps to start moving the needle in the right direction.

For my money, the current generation of conversational bots suck at understanding how humans communicate. However, people very much want things to work. Maybe we should take a slow track on improving the machines and start focusing on improving people’s ability to work with the crummy systems we have. Specifically, we should develop a standard vocabulary for talking to our bots more effectively and start teaching people how to use it!
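To make the "standard vocabulary" idea concrete, here is a toy sketch of what a constrained command vocabulary might look like on the bot side. Every intent name and keyword below is hypothetical, invented purely for illustration; the point is that a small, published set of canonical words is far easier for today's crummy systems to route reliably than free-form speech.

```python
# Hypothetical standardized "bot vocabulary": a small set of canonical
# keywords, each mapped to a routing intent. All names are invented
# for illustration only.
INTENTS = {
    "billing": ["bill", "invoice", "charge"],
    "returns": ["return", "refund", "exchange"],
    "agent":   ["agent", "representative", "human"],
}

def route(utterance: str) -> str:
    """Route a caller's phrase to an intent, or report 'unknown' so the
    system can prompt the caller for one of the published keywords."""
    words = utterance.lower().split()
    for intent, keywords in INTENTS.items():
        if any(w in keywords for w in words):
            return intent
    return "unknown"

print(route("I want a refund"))          # returns
print(route("connect me to a human"))    # agent
```

A caller who has learned the vocabulary gets routed on the first try; one who hasn't gets a short list of words to choose from instead of an open-ended guessing game.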

Businesses have enormous reach with mass media advertising, and we can use this vehicle to show customers how to interact with these systems using as few words as possible. Maybe we’ll start seeing sitcoms with people using the right words to navigate these machines efficiently.

For anyone who might think this is impossible, just think of the myriad computer skills we have imparted to the general population, who now routinely point-and-click, swipe right, pinch-and-spread to enlarge an image, and so on. Those people who agree with the premise that bots suck would gladly participate in any activity that might give them a chance at getting a problem resolved through the contact center.



As with anytime I take a position at odds with the “accepted industry wisdom,” I fully expect pushback; I’ve already gotten it from my BCStrategies colleagues! My simple response is, WAKE UP! The customer support business, of which the contact center industry is the centerpiece, is a laughingstock. You’re joke fodder for every halfway talented comedian, sitcom writer, and late-night host in the country.

I was around when IT’s focus was replacing people who added up columns of numbers, and we still made mistakes. With customer service, we are talking about replacing much higher-level skills: people skills, the one commodity that never comes cheap. Specifically, we need people with brains and the ability to engage with other people, unwind their convoluted stories, clearly identify their problems, and apply the knowledge to move them on to a satisfactory solution.

Even if ChatGPT can string words together convincingly, we still need to train those bots on our internal product vocabulary and systems; maybe it can read all of our training manuals and find the errors in them. If we are using AI to create music or art, there is no “wrong” answer. In business, we’re required to provide the right answer.

A realistic near-term objective for ChatGPT should be to make it smart enough to offload some of the routine, low-level requests from our scarce, expensive (i.e., “trained”) people resources. As Mr. Lazar opines, we should use this new technology as a helper for, not a replacement of, real human skills. Machines with empathy and meaningful communication skills are still out of reach.

We are in the technology business, which is why it’s important for us to maintain a realistic perspective. Flexibility and open-mindedness are still key design principles, and often show better results than adherence to accepted dogma. At the end of the day, we are not responsible for deploying technology, but for delivering capabilities that work. It might be time to refocus by putting the technology in its place and prioritizing meaningful results.