Considerations for AI Product Acquisition

A recent article in the Columbia Law Review titled “AI Systems as State Actors” contained this stunning and important quote from authors Kate Crawford and Jason Schultz:

…When challenged, many state governments have disclaimed any knowledge or ability to understand, explain or remedy problems created by AI systems that they have procured from third parties. The general position has been ‘we cannot be responsible for something we don’t understand.’ This means that algorithmic systems are contributing to the process of government decision-making without any mechanisms of accountability or liability.

My first reaction to this quote was horror, and even once I got a grip, my concerns were only slightly assuaged. While the article was published in 2019, it nonetheless raises a valid and timely point: Where does liability fall when reliance on an AI-powered system causes harm? The short answer is “everywhere,” but that’s not really a useful answer.


Pay Attention to the Small Print

Having just returned from a highly successful Enterprise Connect, where “AI” was the talk of almost every presentation I attended, I find that the quote from the Columbia Law Review article captures one of my great concerns about widespread AI deployment. Many of the entities that use AI-driven products, particularly in contact centers, may not sufficiently understand how the AI portion actually works. They also may not have considered potential obstacles that could create not only implementation nightmares, but also customer experience (another buzz phrase heard often at Enterprise Connect) and legal ones. While you don’t need to understand the complex components of a fuel-injected engine to drive a car successfully, you do need a solid understanding of which knobs and buttons serve what functions.

I’m concerned that vendors selling sophisticated AI applications don’t want to share their proprietary processes with interested – but not well-informed – enterprise consumers. I’m also concerned that those same enterprise consumers — particularly those looking to use AI broadly to manage routine and not-so-routine tasks — won’t understand the subtleties of the systems being acquired well enough to ask the kinds of questions that will help mitigate liability if (and when) things go wrong. AI deployment is relatively new, and the litigation is just beginning. Given the nature of the beast, enterprise consumers are well-advised to pay attention not only to the small print, but also to what’s not included in any acquisition agreement.

On securing AI-driven products and services, industry analyst Tom Brannen commented that “it’s really more about the partnership between the enterprise and the vendor than it is simply a technology decision. For an optimal customer experience, it should be more a question of ‘with what company do we want to partner?’ and less a question about the technology itself: ‘Who do we trust, given the concerns of reliability, security, and compliance?’”


Key Criteria & Questions

Key factors to consider in any agreement include contract terms, allocation of risk, privacy protections and data security, and products liability. Secondary considerations (depending on the nature of what the enterprise actually does) include antitrust, international regulations covering not only technology transfers but also data protection (think GDPR and the new EU AI regulation), intellectual property rights, and issues associated with communications technologies. How, for example, will the consumer be notified of process updates and revisions? Will the underlying agreement(s) allow for product revision and evolution?
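
One practical way to keep these considerations from slipping through a procurement review is to track them as structured data. Below is a minimal, hypothetical Python sketch of the factors above; the checklist structure and the unreviewed() helper are my own illustration, not a standard template or legal advice.

```python
# Hypothetical due-diligence checklist; items mirror the considerations
# discussed above, but the structure itself is illustrative only.
AI_ACQUISITION_CHECKLIST = {
    "primary": [
        "contract terms",
        "allocation of risk",
        "privacy protections and data security",
        "products liability",
    ],
    "secondary": [
        "antitrust",
        "international rules on technology transfer",
        "data protection (GDPR, EU AI regulation)",
        "intellectual property rights",
        "communications-technology issues",
    ],
    "open_questions": [
        "How will the consumer be notified of process updates and revisions?",
        "Do the underlying agreements allow for product revision and evolution?",
    ],
}

def unreviewed(signed_off: set[str]) -> list[str]:
    """Return every checklist item counsel has not yet signed off on."""
    return [item for items in AI_ACQUISITION_CHECKLIST.values()
            for item in items if item not in signed_off]

# Example: two items reviewed so far; everything else still needs attention.
print(unreviewed({"contract terms", "allocation of risk"}))
```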

A recent article from Vox has a very telling title: Even the Scientists Who Build AI Can’t Tell You How It Works. In it, Sam Bowman, who runs an AI research lab at NYU, explains that artificial neural networks, like the ones used in ChatGPT, do not run on explicitly coded rules executed in sequence. Rather, they learn to detect and predict patterns over time. These systems essentially teach themselves, which makes it very difficult to understand precisely how they actually work. According to the Vox article, that opacity can lead to unpredictable and even risky scenarios as these programs become more ubiquitous.
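
To make that distinction concrete, here is a toy Python sketch contrasting the two approaches, using scikit-learn’s CountVectorizer and LogisticRegression. The spam-flagging task, the phrases, and the four training examples are all invented for illustration; only the rule-based-versus-learned contrast is the point.

```python
# Rule-based: every decision traces to a human-written, inspectable rule.
def rule_based_flag(message: str) -> bool:
    banned_phrases = {"free money", "act now", "wire transfer"}
    return any(phrase in message.lower() for phrase in banned_phrases)

# Learned: behavior emerges from weights fit to historical examples, so
# no single line of code explains any individual decision.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

train_messages = ["free money now", "meeting at noon",
                  "wire transfer today", "lunch tomorrow"]
train_labels = [1, 0, 1, 0]  # 1 = flag, 0 = don't flag (toy history)

vectorizer = CountVectorizer()
model = LogisticRegression().fit(
    vectorizer.fit_transform(train_messages), train_labels)

def learned_flag(message: str) -> bool:
    # The "why" is buried in model.coef_, not in readable rules.
    return bool(model.predict(vectorizer.transform([message]))[0])
```

The first function can be audited line by line; explaining the second means interrogating fitted weights, which is exactly the accountability gap Crawford and Schultz describe.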

In addition, as is the case with all AI systems, the data used is based entirely on historical information. So the key questions here include: How far back does the data reach? How old is it? What methods were used to gather it? Are those methods the same now as they were when the data was first collected? Who knows?
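
If nothing else, an enterprise can insist those answers be written down. Here is a minimal sketch, assuming a hypothetical intake form, of a structured provenance record a buyer might require a vendor to complete; the field names and example values are my own illustration, not any vendor’s actual schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class TrainingDataProvenance:
    collected_from: date    # how far back does the data reach?
    collected_to: date      # how old is the most recent data?
    collection_method: str  # what methods were used to gather it?
    methods_changed: bool   # have those methods changed since collection began?
    known_gaps: list[str]   # what populations or periods are missing?

# Example entry (values invented for illustration).
record = TrainingDataProvenance(
    collected_from=date(2015, 1, 1),
    collected_to=date(2022, 6, 30),
    collection_method="contact-center transcripts (vendor-reported)",
    methods_changed=True,
    known_gaps=["non-English calls", "pre-2015 interactions"],
)
```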


Know What You’re Buying

To me, it’s imperative that consumers of AI-enabled processes and systems have more than a passing grip on how those processes work. You wouldn’t buy a car if you didn’t know how to steer it, and bringing AI processes into the enterprise is really no different. Without this fundamental knowledge of what’s going on, the enterprise could be in for a very unfortunate and costly comeuppance.

I’m NOT saying avoid AI; that would be impossible anyway. What I am saying is that, as the old Syms tagline read, “an educated consumer is [the] best customer.” That couldn’t be more true for the acquisition of AI-enabled products and services.