
Generative AI: Not the Droids You Are Looking For

In a world where technology promises to revolutionize every aspect of our lives, generative AI has been heralded as the next big leap. It's clearly important, but it isn't yet all that useful to most of us. What was billed as the most exciting development in human history is turning out to be quite dull.

Sam Altman said that customer service will likely “have way fewer jobs relatively soon” due to ChatGPT. Bill Gates said that the artificial intelligence (AI) technology behind tools like ChatGPT could be as revolutionary as the graphical user interface was in 1980.

Mark Zuckerberg said, “Generative AI is the key to solving some of the world’s biggest problems, such as climate change, poverty, and disease. It can potentially make the world a better place for everyone.” 

Elon Musk said, “Generative AI is the most powerful tool for creativity ever created. It has the potential to unleash a new era of human innovation.” 

Of course, none of these individuals are impartial. They have all made significant investments in generative AI. But as we approach the two-year anniversary of ChatGPT's launch, we must question their conclusions, or at least their timing. So far, generative AI hasn't earned those accolades, especially in enterprise communications.

This week, the strategy of relying on Apple to provide a Copilot-free zone ended. The computer company announced its version of Copilot, called Apple Intelligence. Like Microsoft, Apple turned to OpenAI to help its customers write emails and messages, review their writing, and summarize anything it can. Apple wants to tie the technology to its devices, a decision that has Elon Musk threatening to ban Apple products at his companies over security concerns.

The Slowing Pace of AI Innovation

Despite the early excitement, the rate of improvement for AI technologies is hitting a plateau. The big OpenAI launch of GPT-4o showed largely the same natural human speech capabilities we'd seen before. It's now multimodal (and flirty). Multimodality is new and impressive, but it's exactly what Google faked when it launched Gemini months ago.

The broad spectrum of applications we dreamed up for AI isn't materializing as expected. Building and running advanced AI systems consume vast resources with diminishing returns.

The only big winner so far is Nvidia, the chipmaker of choice for AI ventures. In 2023, the industry spent a staggering $50 billion on Nvidia chips to train AI models. Demand, in part, is driven by almost every vendor in enterprise communications. Yet those efforts brought in only $3 billion in revenue. It's like buying a Ferrari to win a soapbox derby: impressive, but ultimately overkill for the task at hand.

The Data Dilemma

A significant roadblock for continued AI development is the data itself. Companies have voraciously consumed vast swaths of the internet to train their AI models. The current creed is the more data the better, but there's a problem: they're running out of data. Imagine trying to teach a genius new tricks when they've already read the entire library. The next round involves training on AI-generated data. What could possibly go wrong?

The Revenue Reality Check

OpenAI, a poster child of the supposed AI revolution, doesn't disclose its annual revenue. However, the Financial Times reported that, as of December, the company's revenue was at least $2 billion. The company intends to double that figure by 2025, yet its current valuation is around $90 billion, roughly 45 times revenue. We've seen these oversized valuations before, just before a correction occurs.

Practical Applications: Hit or Miss?

Revolutions take time. About a year ago, Microsoft saw an opportunity to use generative AI to disrupt Google's search business. Bing's share of internet search has not increased at all. Just in case, though, Google swiftly responded and integrated generative AI into search as well. Instead of giving its customers tried-and-true popular links, it offered friendly advice to glue cheese to pizza or eat rocks. These missteps underscore a critical issue: the technology simply cannot yet be trusted to provide reliable assistance.

Every keynote at Enterprise Connect 2024 showcased generative AI, but none suggested it be used directly with customers. The all-powerful technology, it seems, should be used only for internal use cases, such as Copilot or agent-assist solutions. In other words, don't fire your agents just yet. While generative AI is disrupting other sectors, the best use cases in enterprise communications are summarization (of meetings and call center inquiries) and topic modeling. Hardly the revolution we were expecting.

Hallucinations Are a Feature

One of the issues holding back generative AI is hallucinations. Most people define hallucinations as the fictitious or incorrect responses that generative AI routinely provides. But hallucinations are really a feature; they're how generative AI works. Generative AI is playing “Name That Tune” with words instead of notes, completing sentences with probability, not knowledge.

Generative AI differs from other forms of AI. Instead of presenting output tied to specific data, it generates responses from scratch based on probability. Generative AI has no sense of the truthfulness of most of its content. What it generates might be true, or it might be a seemingly accurate but ultimately fabricated response. Hallucinations are normal; it's just that most of them happen to be correct.
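To make that concrete, here is a minimal, purely illustrative sketch of probability-driven text completion. The toy vocabulary and probability table are invented for this example; real models learn distributions over tens of thousands of tokens, but the principle is the same: each word is chosen by likelihood, never checked for truth.

```python
import random

# Invented toy next-word probabilities (illustration only; real models
# learn these distributions from enormous amounts of training data).
NEXT_WORD_PROBS = {
    "the":    {"sky": 0.5, "moon": 0.3, "pizza": 0.2},
    "sky":    {"is": 1.0},
    "moon":   {"is": 1.0},
    "pizza":  {"is": 1.0},
    "is":     {"blue": 0.6, "made": 0.4},
    "made":   {"of": 1.0},
    "of":     {"cheese": 0.7, "rock": 0.3},
}

def complete(prompt: str, max_words: int = 6) -> str:
    """Extend a prompt by repeatedly sampling a likely next word.

    Every word is chosen purely by probability; nothing here checks
    whether the finished sentence is true. A fluent falsehood such as
    "the moon is made of cheese" is as natural an output as a fact.
    """
    words = prompt.lower().split()
    for _ in range(max_words):
        options = NEXT_WORD_PROBS.get(words[-1])
        if not options:  # no learned continuation; stop generating
            break
        candidates, weights = zip(*options.items())
        words.append(random.choices(candidates, weights=weights, k=1)[0])
    return " ".join(words)

print(complete("the"))  # might print "the sky is blue" or "the moon is made of cheese"
```

Whether the output happens to be true depends entirely on the statistics baked into the table, which is exactly why most hallucinations go unnoticed: the probable answer is usually also the correct one.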

Generative AI currently lacks reasonable mechanisms for fact-checking, context analysis, and filtering out unrealistic responses. Gemini's bad advice stemmed from treating The Onion as a reliable source of training data. Enterprises have no control over the training data and therefore need to assume that the output could be wrong. That's why it's not advisable for customer-facing use cases.

A Long Road Ahead

Generative AI is the latest milestone in the long journey of AI development. Other big milestones include IBM's Watson winning Jeopardy! in 2011 and Deep Blue defeating Kasparov in 1997. Both were groundbreaking at the time. Generative AI is remarkable, even magical, but it does not seem to be on a trajectory to disrupt enterprise communications anytime soon.

Determining the most effective ways to integrate generative AI into practical, everyday business operations will take time. Generative AI is disrupting other sectors, especially software development. The models may help some knowledge workers become more efficient, but very few jobs will be displaced by generative AI.

As for those quotes at the beginning of this post, generative AI has had a far larger impact on software development than on customer service, so Altman was clearly wrong. So far, generative AI has made no dent in climate change or poverty, so let's hope it eventually saves the world and makes Zuck right. Musk gets partial credit on creativity: AI-generated books arrived so fast that Amazon had to restrict authors to publishing only three books per day, though perhaps he was referring to sparking human creativity.

Generative AI is a new tool, and it's still early. We don't fully understand this technology or where it will go. It is certainly attracting a lot of investment from very smart people. My point is that, so far, it hasn't done much, and we need to adjust our expectations; it may be a while before it does.

It seems that every enterprise comms vendor is touting generative AI as its competitive advantage, as if unaware that there are few barriers to adopting it. My suggestion is that vendors instead focus on the differentiators that make their products unique. Yes, generative AI should be part of the story, but not the whole story.

Dave Michels is a contributing editor and analyst at TalkingPointz.

Editor’s Note

Check out these sites for multiple quotes from generative AI experts, including those referenced in this article: Skim AI and AI Disruptor. Also check out Lex Fridman's full interview with OpenAI's Sam Altman.