No Jitter is part of the Informa Tech Division of Informa PLC


The Good, the Bad and What We Can Do with Generative AI

Recently I used the standard search engines and asked: “Exactly how much money did Microsoft invest in OpenAI?” (OpenAI is the company behind ChatGPT.) I got back several different answers: $13 billion, $10 billion, $100 million. But which one is correct? As we all know, standard search engines present information that has been published; whether that information is accurate, or contains errors of fact, is up to the end user to figure out.

Generative AI can work exactly the same way: it may return erroneous answers, and the end user doesn’t really know where the chatbot got its information. OpenAI’s GPT-3, the model behind the original ChatGPT, has over 175 billion machine learning parameters and was trained on 570 gigabytes of text. (Microsoft had access to only 10 million machine learning parameters, which was one of the main reasons for Microsoft’s large investment in OpenAI.) Of course, the other reason could be Microsoft’s big push to capture some of Google’s advertising money.

ChatGPT is so new that it is not even in Microsoft Word’s spell check yet, so why all the buzz?

 

The Good

Artificial Intelligence (AI), and generative AI in particular, has made some incredible advances over the last five years or so. There are now various generative AI programs that can, on command, summarize a YouTube video or an article in three sentences. They can generate new images, works of art, poems, and songs.

Generative AI bots can also produce human-like text responses across a wide range of topics, offer suggestions, and answer complex queries. In fact, generative AI has gotten so good that on Twitter, Thomas Ptacek showed that ChatGPT could construct a passage of text on how to remove a peanut butter sandwich from a VCR, and do so in the voice of the King James Bible.

Here is an excerpt of what ChatGPT came up with: “And it came to pass that a man was troubled by a peanut butter sandwich, for it had been placed within his VCR, and he knew not how to remove it. And he cried out to the Lord, saying, ‘Oh, Lord, how can I remove this sandwich from my VCR, for it is stuck fast and will not budge.’”

The quote goes on, but you get the idea – it’s a pretty powerful example of what ChatGPT can generate from a creative prompt.

Another bright spot is the advancement of speech-to-text. At Enterprise Connect 2023, I was able to open Microsoft Word on my laptop, hit the dictate button, and watch Word type everything the keynote speaker was saying, even correcting itself along the way. And that wasn’t even ChatGPT listening. Imagine how helpful the Office 365 applications will be once Copilot is bundled in.

Other good uses include noise cancellation. Vendors are rolling out products that eliminate the dog barking in the background or, in the call center environment, filter out background noise so that the caller hears only the voice of the agent speaking. It’s all pretty incredible stuff.

 

The Bad

With all these powerful capabilities, what is to stop a scammer from cloning my grandson’s voice and calling me with an emergency, claiming he needs money? It sounds just like him, so why wouldn’t I respond accordingly? In cases like this, it is a good idea to agree on a password that only you and your family member know, then ask the person on the phone, “What is our special password?” If the caller cannot answer, hang up immediately.

What is also to stop generative AI from providing instructions on how to build a bomb for terrorist activities? I have heard that vendors providing generative AI solutions are building in “bug bounties,” where these companies pay developers to find ways their safeguards can be circumvented so those holes can be closed. For example, OpenAI launched its Bug Bounty program in April 2023. These programs are certainly a step in the right direction, but they have a long way to go yet.

 

What We Can Do

The technology around generative AI has rolled out so quickly that many companies may have jumped on deploying it without really thinking through all the intricacies as they relate to their business.

For a great customer experience when customers call your company, you need to meet them where they are. In other words, if a customer wants to use voice, offer that as an option; if they want to use chat, offer that as an option.

Case in point: I called my cable provider, and the choices I was offered were nowhere near my actual problem, so I entered “0” to reach a live agent, but that was not an option. They did not offer any voice options at all. After several attempts at “tricking” the system into giving me a live agent, I finally gave up. So this live person jumped ship and went to another cable provider after being a customer for over 18 years. It didn’t matter to me; I would rather have had a root canal than call this company’s 800 number.

Companies are spending millions on technology to provide a great customer experience (CX) without thinking through everything the customer needs when they call. To me, things are backwards: instead of the Marketing Department or the Legal Department solely determining how agents will be scripted to answer calls, the process should include the agents themselves, or at least the call center manager. They know the customers best: their needs, how to respond to them, the products, and so on.

Finally, if you deploy a generative AI-based chatbot to answer your customers’ questions about your product(s), you had better be in control, not the technology. In other words, have a governance framework that integrates with your knowledge base, delivering the latest information you have while also ensuring that your company’s proprietary information does not become part of the Large Language Model’s (LLM’s) training data.
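To make that governance idea concrete, here is a minimal sketch, in Python, of what “staying in control” can look like: answers are grounded only in your own knowledge base, and anything flagged as confidential is filtered out before it ever reaches the model. The knowledge base, the marker strings, and the function names here are all illustrative assumptions, not any vendor’s actual API.

```python
# Minimal governance-layer sketch for a knowledge-base-grounded chatbot.
# KNOWLEDGE_BASE, CONFIDENTIAL_MARKERS, retrieve(), and build_prompt()
# are hypothetical names for illustration only.

KNOWLEDGE_BASE = {
    "returns": "Products may be returned within 30 days with a receipt.",
    "warranty": "All widgets carry a one-year limited warranty.",
    "pricing": "CONFIDENTIAL: internal margin targets for Q3.",
}

CONFIDENTIAL_MARKERS = ("CONFIDENTIAL", "INTERNAL")


def retrieve(question: str) -> list[str]:
    """Naive keyword retrieval over the knowledge base."""
    q = question.lower()
    return [text for topic, text in KNOWLEDGE_BASE.items() if topic in q]


def build_prompt(question: str, passages: list[str]) -> str:
    """Ground the model in approved passages only.

    Passages containing confidential markers are stripped before the
    prompt is built, so proprietary text never leaves the governance
    boundary or ends up in anyone's training data.
    """
    safe = [p for p in passages
            if not any(m in p for m in CONFIDENTIAL_MARKERS)]
    context = "\n".join(safe) or "No relevant information found."
    return (
        "Answer ONLY from the context below. If the answer is not in "
        "the context, say you don't know.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )


# Usage: the prompt handed to the LLM contains the returns policy,
# but a question about pricing would yield no confidential context.
prompt = build_prompt("What is your returns policy?",
                      retrieve("What is your returns policy?"))
```

The point of the sketch is the order of operations: retrieval and filtering happen on your side, under your rules, before any text is sent to a third-party model.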

Generative AI-based chatbots cannot think, reason, or show emotion. Be careful with these technologies: stay in control of how they answer and, of course, keep your own knowledge base up to date and secure.


Stay tuned for another article on what exactly these are:

  • Generative AI
  • Conversational AI
  • Emotional AI
  • Large Language Model
  • OpenAI’s ChatGPT
  • Google Bard