In this week’s No Jitter Midroll (NJM), we focus on one aspect of ServiceNow’s Xanadu platform update: AI Agents. “With ServiceNow AI Agents, we are announcing our vision to leverage AI agents that can understand the environment, tap into all available data across the enterprise, and use that data to make decisions and take actions so they can autonomously handle tasks on behalf of people natively and securely,” said ServiceNow’s Dorit Zilbershot, VP, Platform & AI Innovation, in a prebriefing.
That term – AI agents – has surfaced in other contexts from other vendors. For example, Salesforce launched AI Agents for Sales, Ada and Asana each launched AI agents, Borderless AI launched an HR Agent (built on Cohere), and Google added AI agent capabilities to its Gemini foundation model (FM). Most of the other FM providers have introduced “AI agent” capabilities as well; a few examples include Amazon Bedrock (AWS), OpenAI’s Assistants API and Anthropic’s Claude 3. No Jitter expects more AI Agent news in the coming months.
Agent, AI Agent
AWS defines an “AI agent” as “a software program that can interact with its environment, collect data, and use the data to perform self-determined tasks to meet predetermined goals.” More simply, an agent is any piece of software that acts on your behalf – the calendar invitation and acceptance process is “agentic.” It’s an autonomous process that completes the user’s desired action.
This concept is not new. “If you go back to the 1990s, there was this funky little device called the Magic Cap that had its own [agent-oriented] programming language called Telescript. Its goal was to basically act on your behalf,” said Bradley Shimmin, Chief Analyst, AI & Data Analytics with Omdia. “The demo they gave at the time – and remember, this was the 90s – was to get flowers ordered for your girlfriend, invite her to dinner and set up a table for two at your favorite restaurant. So, ambitious, obviously, on many fronts.”
All of that was an ‘agentic’ process, programmed in Telescript to perform those tasks. Conceptually, it is like today’s robotic process automation (RPA), which allows developers to automate tasks (usually repeatable, time-intensive ones) via coding.
What is different with today’s AI agents is the use of large language models (LLMs). According to Shimmin, an AI Agent is created when an LLM (or multiple different models) is segmented into different roles. Then, a framework is used – which Shimmin calls ‘agentic computing’ – to enable those different roles (models) to communicate with one another. Shimmin said that the different roles include:
- A Brain: the ‘master control unit’ which takes the request and figures out what it needs to do to fulfill the given task – e.g., reserve a table, order flowers, set a date.
- A Planner: This role uses different prompting techniques (e.g., chain of thought or tree of thought) to build a step-by-step plan for accomplishing the request.
- Memory: This role allows the ‘AI agent’ to remember what it is doing. Were the flowers ordered? Was the table reserved?
- Reflection: This allows the AI agent to look back on what it accomplished in the process set up by the Planner. So, the AI agent might say: OK, we reserved a table by the kitchen but not the window. We should obtain the user’s approval for that.
- Tools: To do all the above, the AI agent needs to be able to act. This could be via simple APIs or a more advanced agent framework that incorporates things like math, programming and reasoning.
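The roles above can be sketched in a few lines of code. This is a hypothetical illustration only – not ServiceNow’s or any framework’s actual implementation – and the “LLM” behind each role is stubbed with a deterministic function; in a real system each role would be a prompted model behind an API call.

```python
# Hypothetical sketch of the agent roles Shimmin describes.
# Each role would normally be an LLM call; here they are stubs.

def brain(request):
    # Brain: decides which sub-tasks fulfill the request.
    return ["reserve_table", "order_flowers"]

def planner(tasks):
    # Planner: orders the tasks into an explicit step-by-step plan.
    return [{"step": i + 1, "task": t} for i, t in enumerate(tasks)]

def tool(task):
    # Tools: act on the world (e.g., call an API); here they just succeed.
    return {"task": task, "status": "done"}

def reflect(memory):
    # Reflection: reviews what Memory recorded and flags anything undone.
    return all(r["status"] == "done" for r in memory)

def run_agent(request):
    memory = []  # Memory: a running record of what has happened so far
    for step in planner(brain(request)):
        memory.append(tool(step["task"]))
    return {"complete": reflect(memory), "memory": memory}

result = run_agent("dinner for two")
```

The point of the sketch is the shape, not the stubs: the Brain decomposes, the Planner sequences, Tools act, Memory records, and Reflection checks the record before the agent reports back.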
“To put all of this together, you need an agent framework,” Shimmin said. Examples of agent frameworks include, but are not limited to, MetaGPT, LangChain, LlamaIndex, Microsoft AutoGen, CrewAI, InternLM’s Lagent, Haystack Agents, Embedchain, AutoGPT and BabyAGI.
So, this AI Agent approach, enabled by the agent framework, allows multiple LLMs (which are each given the roles cited above) to communicate amongst themselves and to then act toward the goal(s) set by the user. ServiceNow stated that guardrails for robust oversight will be built into its platform to help ensure organizations can add the levels of governance they need – because the AI agents are acting and communicating on behalf of the organization (just as a human employee might).
ServiceNow’s Customer Service AI Agent
In her presentation, Zilbershot demoed some of the capabilities of ServiceNow’s Customer Service Management AI Agent. The following screenshot shows that the AI Agent conducted its own troubleshooting and then summarized to the human what steps it had taken – and then stopped when it required the human agent’s approval to move forward. (That human agent approval is an example of a guardrail built into the system).
“In this case, [the human agent] can see that the [AI agent] conducted a network review, verified network stability, analyzed similar cases, and even contacted the customer asking for details about the routers,” Zilbershot said. She added that everything the AI agent does is grounded in the company’s information and that the AI agents are built on top of the ServiceNow platform using its data workflows and integrations.
The ServiceNow Customer Service Management AI Agents and IT Service Management AI Agents will be available in November 2024 in limited release, with additional use cases added through 2025.
Again, the key thing to remember here is that none of these processes are programmed. It’s not predictive AI nor traditional software development. The processes are carried out by LLMs talking amongst themselves via an agent framework to autonomously complete a task.
“Model-to-model integration, the use of automation, and alignment of the models with the process itself will drive transformations of the experience an employee or customer has,” commented Stephen Elliot, Group Vice President, I&O, Cloud Operations, and DevOps with IDC. “No one model can do it all. We will be in a multi-model world, just like the multicloud era most enterprises are managing.”
Omdia’s Shimmin concurred, saying that while a single LLM could be segmented into Brain, Planner, etc., an organization could instead “use OpenAI as the agent brain, Anthropic as the tool for code generation, IBM Granite for named entity extraction, and Claude Sonnet as the Planner. I can take any of these and ‘wire’ them together in an adaptive and semi-coupled manner such that they can converse back and forth using [an agent] framework to adjust for changes in the process [it’s been assigned].”
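That “wiring” can be pictured as a role-to-model registry. The sketch below is hypothetical – the model names are placeholders and the provider calls are stubbed – but it shows the design point: the framework dispatches by role name, so any model can be swapped into any role without changing the surrounding process.

```python
# Hypothetical sketch of wiring different models into different roles.
# Each stub stands in for a real API client (OpenAI, Anthropic, etc.).

def stub_model(name):
    def call(prompt):
        return f"[{name}] response to: {prompt}"
    return call

# The framework only sees role names, so models are interchangeable.
roles = {
    "brain": stub_model("openai-gpt"),       # orchestrates the task
    "planner": stub_model("claude-sonnet"),  # builds the step-by-step plan
    "coder": stub_model("anthropic-code"),   # code generation as a tool
    "extractor": stub_model("ibm-granite"),  # named-entity extraction
}

def dispatch(role, prompt):
    # Route a message to whichever model currently fills that role.
    return roles[role](prompt)
```

Swapping providers then becomes a one-line change to the registry rather than a rewrite of the agentic process.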
Benefits of AI Agents
According to Elliot, the benefits of this approach range from the cost savings of not having to hire new employees, to improving the productivity and efficiencies of existing teams.
“Also, improved customer experiences and various processes will happen,” Elliot commented. “And when an organization adds automation and embeds the model as part of the process, you have a more contextually aware process that is more informed and becomes smarter. And when models start talking to each other, the awareness only accelerates.”
The Downsides of AI Agents
Generative AI models are inherently probabilistic because the models are based on the Transformer architecture. “Any generative AI model that's a Transformer architecture will always have at the end of it, this softmax layer that uses a probability curve to say this word, not that word,” Shimmin said. “Because of that, it’s never going to be the same every time.”
Moreover, the models themselves can get really “chatty,” Shimmin said, meaning that there is a lot of traffic – tokens – being inferenced, output, inferenced again and output again. Shimmin referenced his ‘reserve a table’ example saying that all this traffic occurs because the models are ‘talking’ to each other to figure out if they got the table, ordered the flowers, etc.
“Every time you do that, you're basically making another inferencing call. Those costs can really go up, especially as you start using larger and larger context windows. The memory of your agent framework gets bigger and bigger as the process [executes],” Shimmin said. “The models don't have a memory on their own so they have to have it fed into the context window [via the prompt].”
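The cost dynamic Shimmin describes can be illustrated with a toy example. This is not how any real tokenizer or billing model works – the “tokenizer” below just counts words – but it shows why stateless models make chatty agents expensive: the entire Memory is replayed into the context window on every call, so each inference processes more tokens than the last.

```python
# Toy illustration of growing context-window cost in a chatty agent.
# Models are stateless, so prior turns must be re-sent in every prompt.

def count_tokens(text):
    # Crude stand-in for a real tokenizer: one token per word.
    return len(text.split())

def build_prompt(memory, new_message):
    # Memory is replayed into the context window on every call.
    return "\n".join(memory + [new_message])

memory = []
tokens_per_call = []
for msg in ["reserve a table", "table by the kitchen only",
            "ask user to approve kitchen table", "user approved"]:
    prompt = build_prompt(memory, msg)
    tokens_per_call.append(count_tokens(prompt))
    memory.append(msg)

# Each call re-processes everything said so far, so cost climbs per call.
```

Four short turns already quadruple the per-call token count, which is the “bigger and bigger” memory effect Shimmin describes.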
In addition to the token cost, because the entire agentic process uses LLMs (which are probabilistic), the process itself is harder to debug and it is harder to determine where and when something went wrong. “If I built [Telescript] back in the 90s, because it was using traditional software development, I would have been able to trace exactly how and where something went off the rails,” Shimmin said. “But with agentic processes that are built on top of large language models, because so much is dependent upon the prompt engineering – the prompt used for each step – trying to figure out where something went wrong will never be easy. But, it can be done.”
IDC’s Elliot agreed, commenting that organizations must consider things like data security, access, risk management, legal and compliance concerns, and model management. “They need to know what happens with the data used to train the model, and how it produces its outcomes through inferencing. Are [those outcomes] accurate? Can they be explained?” Elliot said. “That said, [these] are all manageable risks. The risk of not pursuing an AI strategy far outweighs [the risk of] acting.”
Want to know more?
Some of ServiceNow’s other news includes, but is not limited to:
- Why is it called Xanadu? ServiceNow names its platform updates alphabetically by major cities.
- The Now Assist Skill Kit: Enables organizations to build, test, and deploy custom Gen AI skills, which are specific Gen AI capabilities. Skills created with the Now Assist Skill Kit can be assigned to ServiceNow AI Agents, which can then act on them autonomously in collaboration with human agents. The Now Assist Skill Kit is available in the ServiceNow Store. Some examples of skills include:
- Chat and email reply generation for live agents
- Change summarization: Allows IT teams to summarize Change requests and assess related data.
- Copilot for Microsoft 365 integration: ServiceNow intelligence within Copilot for Microsoft 365 is now generally available. It was announced in May 2024.
- The Now Platform RaptorDB Pro: A high‑performance database, which is available now to new and existing customers.
- “Our platform has used a standard relational database model, because we track a lot of relationships doing a lot of reading and writing, and that was a little bit of a performance bottleneck, especially with respect to large data sets and things like that,” said Heath Ramsey, VP Outbound Product Management with ServiceNow. “What we've done is change the structure of the underlying database, making it a column store, which gives us the flexibility to drive better performance and ingest more information.”
- Ramsey also said that because of how ServiceNow architected its database platform, customers will not experience any interruption in service.
- Early use cases demonstrated a 53% improvement in overall transaction times, 27X faster retrieval of reports, analytics, and list views, and a 3X increase in transactional throughput across workflows.
- RaptorDB Standard is available now to new customers and will be available to existing customers later next year.
- Now Assist AI for new industries, including Telecom, Media and Technology; Financial Services Operations (Banking and Insurance); Public Sector Digital Services; Retail Operations/Retail Service Management.
And, check out No Jitter’s previous coverage of ServiceNow: