Conversations in Collaboration: HFS Research’s David Cushman on How Gen AI May Help Unlock Unexpected Value

Welcome to No Jitter’s Conversations in Collaboration series. In the current series, we’re asking industry leaders to talk about how AI can boost productivity – and how we define what productivity even is – with the goal of helping those charged with evaluating and/or implementing Gen AI get a better sense of which technologies will best meet the needs of their organizations and customers.

In this conversation, we spoke with David Cushman, Executive Research Leader at HFS Research. Cushman has a long-term focus on emerging technology, tracking enablers ranging from automation, artificial intelligence (AI), generative AI (Gen AI), data and design thinking, Web3 and the metaverse, and process orchestration, workflow, and intelligence, to quantum computing.

Our conversation with Cushman was sparked by a recent HFS Research announcement regarding a series of reports about the impact of generative AI on the U.S. economy. Consistent with the focus of this series, however, we talked primarily about the impact of Gen AI in the contact center and among knowledge workers more generally – how the technology affects productivity and may help businesses unlock unexpected value.

No Jitter (NJ): Your study found that Gen AI will create “high value” in customer service over the next 12 to 18 months. Can you provide a little more context around that?

David Cushman (DC): I think you’ve only got to look at the transformation in the quality of the chatbot experience over the last 12 months to realize that it’s actually quite useful now, instead of incredibly frustrating like it [the chatbot experience] used to be. In the call center, we used to take a “next best action” kind of approach where you’d try to think like your best possible handler and have the [chatbot] answers come up in the ways [those agents] would come up with them, because the [chatbot didn’t] learn on the fly and it [didn’t] continue to improve all the way through.

What we're seeing now is not just that the machines can do bits of the jobs for us, but also that it's the relationship between the machine and the human that creates all the value. So having the bot sit alongside you when you're answering questions is a powerful way in which you can improve, and you can improve [the bot]. It's a genuine collaboration between human and machine, which I don't think we've seen before. It has been much more “I will use the machine to do something,” [but] now it's “let's use each other toward shared goals.” I can see that area accelerating.

NJ: Your study also found that “40% of respondents expect Gen AI to deliver more than 10% improvement in overall productivity.” How is productivity defined? Where are those numbers coming from?

DC: Those numbers came from people who had been using Gen AI in their businesses, across several functions, for a year. We [collected] that data toward the middle to end of last year. I think if you ask that question again – and, frankly, we are, all the time; another set of data will be out before too long – that number [the productivity improvement] is going to go up, and up, and up.

Because now, just talking from my own experience: I used to go to ChatGPT regularly – take myself out of my workflow, go do something, and come back again. Now I still have that running, but I also have my [Microsoft] Copilot license, so every time I open a Word document it starts to bug me – it’s like [Clippy] in the old days, but now [Copilot] is there as an advisor. But again, it’s that relationship – we are working with a machine that will improve from our use of it, and vice versa. I think this is an interesting distinction.

When we’re talking about productivity, we’re doing so in two ways: improving the performance not just of the human but of the bot as well, so you get an overall picture of what productivity means.

Now, having said that, productivity is step one on this journey, because I think it’s where people go first – [they] think, “I can save some time here.” When I started looking at [the questions of] “What do I do differently? What are the new business models I can pursue as a result of Gen AI being available to me?” – that’s when the net new value starts getting created. That’s the exciting stuff.

If I can give you an example – because it’s hard to see the future from where we stand – think of Web 2.0 when it came along. The first part of [its] journey, which I think is [relevant to Gen AI], is this: if I ran a restaurant, the way I got involved in the Internet was to take a photograph of my menu and stick it on the Internet [as a] static screen of information. That was step one [of Web 2.0]. I think we’re kind of there with a lot of what we’re doing with Gen AI at the moment.

The next step then becomes making the menu interactive, so now I can use it to preorder stock, [for example]. I can change my business model overnight. And that only happens when people realize the value of the new technology, rather than just sticking the old business model onto the new technology. It’s like the video shop listing the videos it has on its website – compared to Netflix. That’s the difference in model.

I think we will see much more value created in how you work differently and how you deliver new business models as a result of the emergence of these generative AI technologies – and, frankly, of the action-based models that are following rapidly. So, you’ll have the large language models, which you ask to do something for you, and then the large action models, which will enable the action to happen.

In the call center example, the customer is asking something, and the integration of a large action model with a large language model could mean that, just from the conversation, [an agent] could have a pretty good shot at knowing what the customer wants even before the conversation is finished – and could start the dispatch process with just a confirmation to follow.
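[Editor’s note: The sketch below illustrates the pattern Cushman describes – a language model interprets “the telling” and an action layer pre-stages “the doing,” with a human confirmation as the final step. It is a hypothetical Python illustration; classify_intent and prestage_dispatch are stand-ins, not any vendor’s actual API.]

```python
from dataclasses import dataclass

@dataclass
class Intent:
    action: str    # e.g., "dispatch_replacement"
    details: dict  # entities pulled from the conversation

def classify_intent(transcript: str) -> Intent:
    """Stand-in for a large language model call that reads the live
    transcript and returns a structured intent ("the telling")."""
    # A real implementation would call an LLM with a schema-constrained
    # prompt; the result is faked here for illustration.
    return Intent(action="dispatch_replacement",
                  details={"part": "router", "sla": "next_day"})

def prestage_dispatch(intent: Intent) -> str:
    """Stand-in for the large action model side ("the doing"):
    draft the work order but do not commit it yet."""
    return f"draft-order:{intent.action}"

def handle_call(transcript: str, agent_confirms: bool) -> str:
    draft = prestage_dispatch(classify_intent(transcript))
    # The only remaining human step is the confirmation Cushman mentions.
    return f"{draft} committed" if agent_confirms else f"{draft} discarded"

print(handle_call("My router died yesterday and I work from home...", True))
```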

NJ: You said that the 10% improvement in overall productivity was found through the course of 2023. Was that mostly folks using Web-based access to tools like OpenAI's model because the application-based tools, like Microsoft Copilot, weren’t available yet?

DC: In the workplace, yes, but you also have cases where, let's say it was applied to HR – which it has been, widely – and so they would build, effectively, a chatbot to answer all the queries that your HR team [would normally answer], and they would see a very sudden improvement in their productivity.

IBM did this, for example. So now, basically, for any HR requirement you have across the business, you’re talking to a bot as a first resort – but of course you can always escalate if things get gnarly.
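[Editor’s note: The first-resort-bot-with-escalation pattern Cushman describes might look like the minimal Python sketch below. It is purely illustrative – a dictionary stands in for the bot’s knowledge, and this is not IBM’s actual implementation.]

```python
# Hypothetical first-resort HR bot: it answers routine queries itself and
# hands anything it cannot answer to a human ("if things get gnarly").

KNOWN_ANSWERS = {
    "how many vacation days do i have?":
        "Your balance is in the HR portal under 'Time off'.",
}

def escalate_to_human(question: str) -> str:
    # A real deployment would open a ticket for the HR team;
    # here we just signal the hand-off.
    return f"Escalated to the HR team: {question!r}"

def answer_hr_query(question: str) -> str:
    normalized = question.strip().lower()
    if normalized in KNOWN_ANSWERS:       # routine query: bot handles it
        return KNOWN_ANSWERS[normalized]
    return escalate_to_human(question)    # gnarly query: human takes over

print(answer_hr_query("How many vacation days do I have?"))
print(answer_hr_query("I need to discuss a grievance."))
```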

This means that the role of those HR executives [who were handling inquiries] has moved from simply answering questions on emails and phones all day to helping strategically guide how a project should be resourced, or how you should be developing new skill sets for new, emerging needs. It’s a completely different, transformational freedom for those people. And if you have your data flows set up right, anyone in the business suddenly has a different relationship with the data in the business and can ask the right questions of the business.

So if you and I can ask the same question that someone on the board can ask, we can make the same smart decisions earlier in the process. And that can very quickly change how effective the business is. And so even with these very early instances of trying [Gen AI], people are realizing it will make significant changes to how productive we can be.

Now, those who have gone further on the journey actually predict 20% or more improvement across [multiple areas]. By that I mean, yes, productivity, but also reduction in cost, increased efficiency, even increased market value. Of course, if everyone does it, you’ll struggle to get that 20% improvement in market value – so the challenge becomes who can afford not to take part. So, I’d say this is going to be something that does change the way business functions. It will reduce cost for them, in the main, if they manage it properly.

There’s a whole other business now in the service provider world around managing LLM operations, so that you can control your costs, make the most effective use of these incredibly powerful technologies, and know when to use them and when not to.

[As I mentioned] earlier with Web 2.0, a lot of companies tried to say, “no, you shouldn't use the Web during the day. Don't use Google, it'll be a waste of time.” Then people [used it and said]: But it helps me do my job better. Now you’re seeing that exact same process with generative AI.

NJ: Not to be completely pedantic, but how is productivity being defined and do these enterprises have a measure in place beforehand or is it being applied after the fact? And how does productivity differ from efficiency?

DC: We’re not talking about productivity the way economists talk about it. For our survey we spoke to businesspeople, and they answered according to what they understood productivity to mean. They saw, in its simplest form, that they got more out for what they were putting in. That’s the simplest way to think about it. It’s a bit of a rabbit hunt – ferreting around in productivity too much distracts from what the real value actually is – which is emerging new value propositions, things that you couldn’t do before that you can do now because of the magic of AI in your hands every day.

NJ: One thing I found a little surprising was that 88% of your respondents said that skills like communication, empathy, curiosity, and emotional intelligence are more important in the context of Gen AI. I mean, over my lifetime, there’s been a huge emphasis on STEM, which doesn’t center on those social skills. Can you provide some thoughts on why “invest in the humanities” was a key finding?

DC: When we asked that set of leaders what their biggest barriers were to achieving the kind of success they’d like with Gen AI, prompt engineering and software engineering were at the bottom of the list. And, really, that was the bottom.

So, the AI is very good at giving you answers – you might have to challenge those answers or shape those answers – but it is very good at giving you outcomes. That whole STEM thing is all about building ways in which you can get to the answer. I think what we are looking at now is which questions should we ask? That's a different mindset. That's the classics, critical thinking, and creative thinking mindset. And it doesn't come so easily to those trained in the engineering mindset, because they see ones and zeros, right? It's yes or no. You need to be able to ask: why is it a one or a zero?

So if you have the machine to pretty much work out all the answers for you, the critical thing is knowing what question to ask. In terms of that relationship between humans and bots, the collaboration that we expect in the generative enterprise, humans are always going to be telling the bots where to head, even if [the bots] get incredibly strategic, which we think will come along – the next generation of AI will be beyond generative and into what we call purposeful AI – it will go off and fulfill missions.

So, who's going to set the missions and who's going to decide why that should be a mission? Those are the questions that become more important to the enterprise. [The machines] can support the achievement of mission, but they cannot do the other bit. [Machines] cannot select the mission.

NJ: You had mentioned the term large action model. That was referenced in the Rabbit r1 demo and elsewhere, too. Can you explain what that term means?
DC: Rabbit r1 has its own large action model [LAM]. The LLM is about the telling; the LAM is about the doing. This is interesting because it’s an area where actions have not been part of the conversation up till now. It’s always been hard to talk about actions in IT infrastructure – [but] that could be appearing relatively soon.

The way we’re starting to think about that is in terms of how important an API is – how important software in its traditional formats is – when what you actually want to achieve is actions and outcomes. So with the LAM comes almost a threat to that whole basic level of APIs and software.

NJ: But how do you control what the LAM does? In the Rabbit r1 demo, the LAM just went ahead and booked a trip when asked. You’d still want to review what it does, right? Like with LLMs, the term “guardrails” comes up as shorthand for making sure the LLM doesn’t break business process or say something wrong.

DC: It’s just demonstrating the difference. [With the LAM], it’s not a response to a question; it’s a series of actions that are triggered. It’s almost like robotic process automation [RPA]: you get into a conversation with X, make Y happen, and then this action is triggered by it. That’s a different ballgame from what you’re talking about in terms of governance, ethics, and responsible AI.
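[Editor’s note: The trigger chain Cushman compares to RPA – plus the review step raised in the question – might look like the hypothetical Python sketch below. The action names and the approval gate are illustrative assumptions, not the Rabbit r1’s actual mechanics.]

```python
# Hypothetical sketch: a conversation yields a series of requested actions,
# and a gate holds anything irreversible (e.g., booking) for human review.

IRREVERSIBLE = {"book_trip", "charge_card"}  # assumed high-impact actions

def run_action(action: str, approve) -> str:
    if action in IRREVERSIBLE and not approve(action):
        return f"{action}: held for review"
    return f"{action}: executed"

def on_conversation_event(requested_actions, approve):
    # "Get into a conversation with X, make Y happen" – each requested
    # action fires in order, subject to the gate.
    return [run_action(a, approve) for a in requested_actions]

results = on_conversation_event(
    ["draft_itinerary", "book_trip"],
    approve=lambda action: False,  # simulate a human withholding approval
)
print(results)  # ['draft_itinerary: executed', 'book_trip: held for review']
```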

A lot of what’s slowed down action in projects has been the “what if” scenarios you’ve just described, as if they’re something to do with ethics. They’re not. The boardroom [must] always maintain control of its ethics. It should never hand that over to an AI or to anyone who’s going to build in guardrails or anything else.

Biases and accuracy are different things. Biases are all about asking what your biases are, and how you create a way in which they are either reflected, or accounted for and rejected.

My thought on bias is that all these models that were trained on the Internet are responding to and gathering exactly what the nastiness of the human state is, unfortunately. There’s nothing on the Internet that humans didn’t put there.

Now, if you don’t like that, then you must manage for that bias. And so I think there’s a need for careful consideration, particularly for businesses – but we’re also seeing companies offer indemnity. I worry a bit about that, because what if my LLM says something so damaging to the brand? I don’t think an indemnity for copyright breach is going to cover it.

There are a lot of unknown unknowns in this space. It is moving at an extraordinary pace. Last week, we had two huge new announcements from OpenAI and from Google. One of those effectively means, in my head at least, that we are at the point where we could capture every bit of data about our working day, including video content, and then ask questions: What did we do well, what didn’t we, what was our best decision…. And that sounds great, except that’s a lot of privacy issues right there.

But the scale of the difference between Google Gemini and Gemini 1.5 Pro is [apparent] in the context window: the current one handles 32,000 tokens, while the new one will do a million on an easy day and has been tested up to 10 million. So these huge steps in capability are coming thick and fast, which is why there’s so much fear around it, I think – but also why there’s never been a better time to stay agile, stay a little bit undecided, and stay ready to change.