
Managing AI Tools Requires More Data Transparency

One of the major themes at Enterprise Connect 2023 was artificial intelligence (AI). From keynote presentations to booth demos, AI demonstrated its ability to improve experiences and make life easier. It’s embedded in many products today, and undoubtedly will be embedded in many more to come.

One compelling use case for AI is assisting agents in a contact center. By listening to a conversation, the AI can surface the documents an agent needs or suggest next steps. It can even analyze the emotion in callers’ voices and suggest responses for the agent.
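As a rough illustration of how such an agent-assist loop might be wired together, here is a minimal sketch. The keyword-based sentiment check and the small knowledge-base lookup are toy stand-ins for the trained speech and language models real products use; none of the names below come from any vendor’s actual API.

```python
# Toy sketch of an agent-assist loop: score caller sentiment and
# surface suggested articles. The keyword heuristics are illustrative
# placeholders for real emotion and retrieval models.

NEGATIVE_WORDS = {"frustrated", "angry", "cancel", "unacceptable", "refund"}

KNOWLEDGE_BASE = {
    "billing": "KB-101: How to review and dispute a charge",
    "refund": "KB-202: Refund policy and processing times",
    "outage": "KB-303: Current service status and workarounds",
}

def score_sentiment(utterance: str) -> str:
    """Very rough stand-in for a real emotion/sentiment model."""
    words = set(utterance.lower().split())
    return "negative" if words & NEGATIVE_WORDS else "neutral"

def suggest_articles(utterance: str) -> list[str]:
    """Surface knowledge-base entries whose topic appears in the utterance."""
    text = utterance.lower()
    return [doc for topic, doc in KNOWLEDGE_BASE.items() if topic in text]

def assist_agent(utterance: str) -> dict:
    sentiment = score_sentiment(utterance)
    next_step = ("Acknowledge the frustration and offer to escalate"
                 if sentiment == "negative"
                 else "Confirm the issue and continue")
    return {
        "sentiment": sentiment,
        "articles": suggest_articles(utterance),
        "next_step": next_step,
    }

if __name__ == "__main__":
    print(assist_agent("I'm frustrated, I was billed twice and I want a refund"))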

To do this, the AI must be trained on very large data sets. Some AI models are trained on information available on the internet, Reddit, or call records. That training decision is not made by the organization that deploys the AI tool.

When deploying products and services with AI embedded into them, organizations must be sure that they can avoid unintended consequences from biased algorithms, faulty training, or incomplete parameters.

By confining the data that the AI can access, organizations can avoid problems like the stories we’ve all heard of ChatGPT responses that included false or fabricated information.
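One way to confine what the model can draw on is to answer only from a curated document set and decline anything outside it. Below is a minimal sketch of that idea, using a toy word-overlap retriever and made-up document names; it is not a production retrieval-augmented pipeline.

```python
# Toy sketch of "grounded" answering: respond only from an approved
# document set, and decline when nothing relevant is found, rather
# than letting the model improvise an answer.

APPROVED_DOCS = {
    "returns_policy": "Items may be returned within 30 days with a receipt.",
    "warranty": "Hardware is covered by a one-year limited warranty.",
    "shipping": "Standard shipping takes three to five business days.",
}

def retrieve(question: str) -> tuple[str, str] | None:
    """Pick the approved document with the most word overlap, if any."""
    q_words = set(question.lower().split())
    best_name, best_score = None, 0
    for name, text in APPROVED_DOCS.items():
        score = len(q_words & set(text.lower().split()))
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, APPROVED_DOCS[best_name]) if best_name else None

def answer(question: str) -> str:
    hit = retrieve(question)
    if hit is None:
        # Refusing is the guardrail: no source document, no answer.
        return "I can't answer that from the approved documents."
    name, text = hit
    return f"According to {name}: {text}"

if __name__ == "__main__":
    print(answer("How long does standard shipping take?"))
    print(answer("What do you think about the election?"))
```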

But how does an organization truly manage results from an AI solution? Who monitors the AI and makes corrections and adjustments?
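There is no single answer, but a basic prerequisite for any monitoring is keeping a reviewable record of what the AI was asked and what it produced. Here is a minimal sketch of such an audit log; the field names, file location, and review flow are assumptions for illustration, not a standard.

```python
# Toy sketch of an audit trail for AI suggestions: record every
# prompt/response pair so a human reviewer can later inspect,
# flag, and correct problematic outputs.

import json
import time
from pathlib import Path

LOG_FILE = Path("ai_audit_log.jsonl")  # assumed location; one JSON record per line

def log_interaction(prompt: str, response: str, model: str) -> None:
    record = {
        "timestamp": time.time(),
        "model": model,
        "prompt": prompt,
        "response": response,
        "reviewed": False,   # flipped by a human reviewer later
        "flagged": False,    # set when the output is judged inappropriate
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def unreviewed_records() -> list[dict]:
    """Return interactions still awaiting human review."""
    if not LOG_FILE.exists():
        return []
    with LOG_FILE.open(encoding="utf-8") as f:
        return [r for r in map(json.loads, f) if not r["reviewed"]]

if __name__ == "__main__":
    log_interaction("Suggest a reply to an angry caller", "Offer a refund.", "demo-model")
    print(f"{len(unreviewed_records())} interaction(s) awaiting review")
```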

Snapchat recently launched an AI chatbot with built-in guardrails intended to make it safe for teenagers to use. Even so, it produced some wildly inappropriate results regarding drinking, drugs, and sex.

As long ago as 2017, it was apparent that machines have knowledge that we’ll never understand. As a Wired magazine piece put it, “What we know depends upon the output of machines the functioning of which we cannot follow, explain, or understand.”

“We are increasingly relying on machines that derive conclusions from models that they themselves have created, models that are often beyond human comprehension, models that ‘think’ about the world differently than we do,” according to technologist David Weinberger.

Newer large language models (LLMs) such as ChatGPT are displaying an unpredictable phenomenon: they have capabilities and knowledge, such as multi-step reasoning, that were never programmed into them. “Researchers are racing not only to identify additional emergent abilities but also to figure out why and how they occur at all.”

AI models have become “black boxes” that perform analytical functions their human creators cannot explain or even comprehend.

How can an organization manage such a thing, or troubleshoot problems? To fix a problem, you must understand why and how it occurred, and that is impossible with a “black box.”

The promise of AI is immense. However, when deploying products and services with AI embedded into them, organizations must be sure that they can control the results and the experience.