I’ve talked to quite a few enterprise IT folks who have a healthy skepticism about generative AI, at least in the near term. I don’t think they question that the technology has the potential to be transformative; they are just also aware of the obstacles, challenges, and costs that could slow or redirect the promised transformations.
That’s why I found this statistic from Irwin Lazar of Metrigy so interesting: Lazar told Search Unified Communications that upcoming Metrigy research will show that “less than 20% of [end user] participants would definitely use generative AI when it is available.” The article quotes Lazar expressing skepticism about companies’ willingness to pay for generative AI-driven features.
In a No Jitter post on the announcement of Microsoft Copilot pricing, Kevin Kieller of enableUC wrote: “I would happily pay $30 a month if Copilot helped me create articles (like this one) faster or could help me more quickly generate one of the many PowerPoint presentations I deliver every year. Any sales professional who could make just one additional sale, through better note taking, email writing, or faster proposal generation, might be well served by spending $360 for a year of Copilot.”
The key word in Kieller’s assessment is “if.” It’s easy to believe a generative AI system can create presentations or sales proposals. The question is, how much work does it take to get a result out of the system that replaces what the person could have done on their own? And it turns out that, for as much as people are rightfully worried about how large language models (LLMs) are trained for generative AI, we’ll probably need to worry at least as much about how the technology’s users are trained.
That’s the upshot of this article in CIO Dive (which is owned by Enterprise Connect parent company Informa). The article cites a survey showing that many AI end users don’t believe they’ve received the training they need to use the tools optimally. According to CIO Dive:
More than half of employees say their employer has provided access to AI-powered tools, the survey found. However, those employees had to figure out how to use the tools by themselves without guidance. Only 1 in 5 employees are confident in their ability to write meaningful prompts.
It was during the second wave of ChatGPT hype, around the beginning of this year, that we started to hear about how generative AI was going to create new kinds of jobs, including “prompt engineer.” That sounded intriguing and vaguely promising to those who were worried about generative AI’s impact on the workforce: Just like prior generations of technology, we could hope that generative AI would open up whole new types of work.
But the survey cited in the CIO Dive article suggests that everybody has to be their own prompt engineer, and that, as yet, this skill is not being widely taught (if in fact there’s anyone really qualified to teach it in the first place). For that matter, has “prompt engineering” itself been defined clearly enough to teach it to someone else?
Enterprises are moving ahead with deployments of generative AI assistants, as this additional CIO Dive article notes. The technology is moving fast, and deployments will pick up as well. The question may be how fast users will be willing and able to keep up.