We need to have a chat about generative AI.
ChatGPT has quickly become the most talked-about technology in enterprise communications, perhaps in all of information technology. It is clearly a breakthrough in many regards, but those who claim its impact is revolutionary are the ones hallucinating.
Though it has a nice ring to it, ChatGPT is not an iPhone moment. For starters, ChatGPT is not a moment at all. Steve Jobs introduced the iPhone to the world on January 9th, 2007. The world had not seen anything like it.
Although you could say ChatGPT had its moment when it launched on November 30, 2022, it wasn’t a moment in the same way. For starters, the GPT-3 family of models behind it wasn’t that different from GPT-2, which was itself an incremental improvement over the GPT-1 we met in 2018. GPT-3 was and remains impressive, but it doesn’t work differently than previous AI systems. It simply follows the standard tech curve that demands each subsequent version be faster and better.
ChatGPT is indeed much better than its predecessors. It has more human-like conversational capabilities than prior chatbots, and it’s much simpler to use. Let’s take a moment to discuss what ChatGPT is really good at. Then, I’ll cover how and why its impact will be less than the current hype suggests.
AI is not new, and it has progressed more slowly than many expected. AI is a very broad category, but today’s hype is centered on generative AI, or AI that generates content such as text, images, and more. Generative AI works by identifying patterns in its training data. In the case of OpenAI and ChatGPT, that training data included much of the Internet.
The latest generative AI breakthrough can largely be attributed to Google. The tech giant’s invention of the transformer architecture in 2017 paved the way for a new era of generative AI built on large language models (LLMs). The transformer is a neural network architecture capable of processing large amounts of sequential data. It uses attention mechanisms to assign different weights to different parts of the input, allowing the network to focus on the most relevant information.
This technique has proven highly effective in language modeling and has since become the cornerstone of most LLM-based generative AI. These models power applications such as chatbots, language translation, and creative writing. The advances in generative capabilities have expanded the possibilities for supplementing, or even powering, customer service functions.
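To make the attention idea a bit more concrete, here is a minimal sketch of scaled dot-product attention, the weighting step at the heart of the transformer; the tiny dimensions and random inputs are invented purely for illustration.

```python
# Minimal sketch of scaled dot-product attention, the weighting mechanism
# introduced in the 2017 transformer paper. Toy sizes, illustrative only.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Weigh each input position by its relevance to every other position."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # similarity between positions
    weights = softmax(scores, axis=-1)   # attention weights sum to 1 per row
    return weights @ V, weights          # weighted blend of the value vectors

# Four token positions, each represented by an 8-dimensional vector.
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))

output, weights = scaled_dot_product_attention(Q, K, V)
print(weights.round(2))  # each row shows how much one position "attends" to the others
```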
OpenAI was founded in 2015 and released its first GPT model in 2018. Last November it released ChatGPT. According to OpenAI, it took only five days for ChatGPT to reach a million users. That’s impressive, and many consider OpenAI the new AI leader, and it may be. It’s hard to know because OpenAI has been generous with public access, while Google has kept most of its AI from the public.
The fast pace of adoption certainly was groundbreaking. This growth can be partially attributed to its simple, chat-like UI. Both its input and output can be natural language, so advanced AI became accessible to anyone who can type.
The underlying models have already improved, so I’ll refer to the chatbot, whether it is running GPT-3, 3.5, or 4, simply as ChatGPT. The technology is very good at language. It produces articles, essays, scripts, and more with a high degree of sophistication in multiple languages. It can also produce computer code in multiple programming languages.
It can also work iteratively. It accepts feedback and adjusts its output, which makes it well suited to human-machine hybrid work. We previously expected AI to do the work for us, but that’s been problematic. For example, when it comes to driving, we tend to think of one driver: human or bot. Bots are very good at driving until they crash. There isn’t a lot of room for an assistant driver.
However, many tasks can benefit from both iteration and assistants, such as writing. Creating prose or code is often an iterative process, and ChatGPT can be of assistance (assistants?). This type of AI-augmented, human-driven work is revolutionary in its own right. It allows humans to work more quickly. Essentially, using AI as a tool can offer a significant gain in both quality and quantity of output. ChatGPT has opened the eyes of businesses and consumers to just how useful (and natural) AI can be.
I don’t mean to belittle this accomplishment, but we’ve been here before. I remember when Deep Blue beat Kasparov at chess in 1997. It was a huge victory: a machine had finally conquered a human in that ancient game of minds. I also remember IBM Watson winning at Jeopardy in 2011. Searching the database was trivial, but understanding the intended answer (or question, in this case) was extremely complex. The bot’s power was its ability to combine an understanding of the human aspects of the clue with its search prowess. Then came DeepMind’s AlphaGo winning Go matches in 2017. Another victory in an ancient game of minds and permutations.
The bottom line is that AI is getting better (scary better), but it’s not great. Cars still don’t drive on their own without occasionally ramming into emergency vehicles. AI breakthroughs are not new. They are expected, and if anything, we were overdue. Also not new are the hype cycles that follow.
After IBM’s Watson won Jeopardy, the company turned its focus to health care. Watson was going to provide doctors with insights on symptoms, research, and treatments. It didn’t work, and Watson Health was sold off for parts. There’s plenty of new hype around ChatGPT.
BT, for example, announced that technologies like automation and AI will replace 10,000 jobs by 2030. That’s a pretty bold and distant prediction. BT hasn’t specified which automation and AI solutions will be offered, procured, and implemented, presumably because they don’t exist yet. Nor could it specify the cost of said solutions. By 2030 it will be someone else’s problem.
IBM, a one-time AI leader itself, proclaimed a hiring pause for back-office roles that could be replaced by AI, representing a 30% reduction in back-office jobs, or 7,800 jobs, over five years. Gibberish. That’s thirty percent of back-office jobs, not total jobs: a paltry reduction of less than 3% of its workforce over five years, in the shadow of the tens of thousands of layoffs that Meta, Microsoft, Amazon, and Google made in 2022 and 2023. The double irony here is that ChatGPT seems more adept at front-office, customer-facing roles.
High-level, vague commitments are about the best we can expect for now, because it will take time for this technology to work its way into enterprise products. The reason ChatGPT remains free to the public is that OpenAI values the ideation around how to apply this technology. Everyone suspects there’s something important happening, but it’s equally important to note that it hasn’t happened yet.
Chatbots are a big part of customer service, and ChatGPT shows that they can be much better. A common question is, "Why don’t organizations just switch to ChatGPT instead of whatever they are using now?" There are several reasons why not, starting with the fact that ChatGPT doesn’t know anything useful. It’s a natural conversationalist that has nothing to say. It doesn’t even know the current weather. The Internet was used to train ChatGPT, but that training data stops in 2021.
It should be noted that most people do not request customer service for common inquiries they could easily solve with an internet search. Customer service is very specific and action-oriented, and it requires connectivity to back-end systems for ordering, shipping, accounting, and more. ChatGPT isn’t connected to those. Think of ChatGPT as a concept car at a car show. You can’t drive it or even buy it, but it’s the coolest car in the show — to look at.
Of course, that will change. Several companies are already using ChatGPT for its conversational capabilities while sticking to other AI technologies for back-end services. That’s the right way to do it for now, and it does result in an improved experience, but not a revolutionary one.
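As a rough illustration of that hybrid pattern, here is a hypothetical sketch in which a deterministic back-end system supplies the facts and a language model is asked only to phrase the reply; call_llm and get_order_status are placeholders I invented, not real APIs.

```python
# Hypothetical sketch of the hybrid pattern described above: a deterministic
# back-end system supplies the facts, and an LLM is used only to phrase the
# reply. call_llm() and get_order_status() are stand-ins, not real APIs.

def get_order_status(order_id: str) -> dict:
    # In a real deployment this would query the order-management system.
    return {"order_id": order_id, "status": "shipped", "eta": "June 3"}

def call_llm(prompt: str) -> str:
    # Placeholder for whatever LLM service the business has chosen.
    return f"[LLM-generated reply based on: {prompt}]"

def answer_customer(order_id: str) -> str:
    facts = get_order_status(order_id)  # ground truth comes from the back end
    prompt = (
        "Write a short, friendly status update for the customer using ONLY "
        f"these facts: {facts}"
    )
    return call_llm(prompt)

print(answer_customer("A-1234"))
```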
Then there’s that other issue about hallucinations. I love the term. It sounds so much more magical than being a bald-faced liar.
ChatGPT is an unbelievable liar. That’s the ticket: ChatGPT is the George Santos of chatbots. Over the years, we have learned to trust machines. Calculators don’t lie, but the most sophisticated AI not only lies, it does so regularly and with conviction. It doesn’t do it knowingly. As far as it’s concerned, it’s "lying" about everything: it’s creating sentences with statistics. This is a problem for enterprises that strive to be honest with their customers.
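To see what "creating sentences with statistics" means, here is a toy next-word generator; the vocabulary and probabilities are invented for the example and bear no resemblance to how GPT models actually score tokens, but the point stands: the machine samples likely words, with no concept of true or false.

```python
# Toy illustration of "creating sentences with statistics": each next word is
# sampled from a probability distribution, with no notion of true or false.
# The vocabulary and probabilities here are invented for the example.
import random

next_word_probs = {
    ("the", "capital"): {"of": 0.9, "city": 0.1},
    ("capital", "of"): {"France": 0.5, "Atlantis": 0.3, "Mars": 0.2},
}

def next_word(context):
    dist = next_word_probs.get(context, {"<end>": 1.0})
    words, probs = zip(*dist.items())
    return random.choices(words, weights=probs)[0]

sentence = ["the", "capital"]
while sentence[-1] != "<end>" and len(sentence) < 6:
    sentence.append(next_word(tuple(sentence[-2:])))
print(" ".join(sentence))
# A fluent continuation can be statistically likely and still be false.
```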
There were a bunch of predictions earlier this year that ChatGPT would replace reporters. That hasn’t really happened, because newspapers need factual, trusted reporting. Writing is important, but most organizations want writing that is tailored to customer needs and contains specific, timely information related to their inquiries. ChatGPT’s convincing lies create considerable legal, financial, and reputational risk for any business that uses it.
Despite what I write here, I too am excited about ChatGPT. I’ve been experimenting with it regularly. I think it has some real potential to be an effective collaboration tool. I expect newer versions to improve functionality and usability. ChatGPT is indeed an important milestone in our progress with AI, possibly even more significant than beating a human chess champion. But that’s it - a milestone of ongoing progress.
My attitude toward bots continues to change. In Bots Rule, Agents Drool I wrote about how I now find bots to be truly useful. That remains true but has little to do with ChatGPT. It was based on personal experience with bots, and while ChatGPT does improve the machine-human interface, it hasn’t improved many of the bots I actually use. Progress turns out to be more linear than many of us would like.
Dave Michels is a contributing editor and analyst at TalkingPointz.