
Asking the Tough Questions on AI

The battle of qualitative over quantitative analysis has gone on for decades (if not eons). But as artificial intelligence (AI) has been deployed in new and more systematic, sophisticated ways in business and politics as well as in science, it is critical for those who use this information to consider its vulnerabilities before relying on the results it produces.

As is the case with most decisions, quantitative and qualitative factors must be balanced. What's changed recently is the growing presence and sophistication of AI tools, along with the algorithms used to manipulate gargantuan quantities of data. This capability has never before been available, and AI's role in our lives is growing whether we invite it in or not. The resulting analyses may make it easier for decision makers to consider findings generated by complex algorithms that draw on more raw data and more variables than have ever been available before. Yet AI providers who legitimately tout the power and potential of these new analytics solutions often fail to disclose how they perform the analysis that yields the conclusions they draw, and upon which their customers rely. As author Cathy O'Neil argues in her 2016 book "Weapons of Math Destruction," until the proprietary "secret sauce" (how the company analyzing the data does its number crunching and makes its money) is understood, it's impossible to trust the validity of the numbers that are used to make important decisions.

Given these challenges, how can decision makers make not just efficient, but effective, use of the information that's provided without being overwhelmed by it? However a decision maker opts to use AI, it is imperative that qualitative analysis remain a part of any critical decision. This concept isn't new, but remains as true as ever. Consider the following two real-life examples.

Years ago, I worked at a call center for a major credit card provider. This was in the days when call centers were relatively young and data analytics were relatively simple. At the end of each month, agents were ranked strictly by the number of calls they handled. The agent who, month after month, finished second was probably smarter than all of the other agents put together. But she always finished second. Why? Because when the agent who consistently finished first received a complex call that required more than a minimum amount of time, she would "accidentally" hang up on the caller, hoping that when the caller called back (usually aggravated by then), he'd get a different agent who was willing to take the time to resolve the problem. Eventually, management caught on (the agent who finished first was NOT the brightest bulb), but only after months of horrible customer service. Here, the analytics, evaluated alone, yielded a bad result and generated an undeserved bonus for an undeserving employee.

The second example comes from a law firm (and it's not unique). Young associates are often given target "billable hours," goals they need to hit in order to be considered for promotion and financial reward. Set aside the fact that this model often works against the goal of providing efficient service to clients (that is, the attorney's interest in billing hours runs counter to the client's interest in efficiency). Is the attorney who logs more billable hours under this model really worth more than the attorney who gets the job done in less time and thus bills fewer hours? If you look at the analytics alone (again, a simple but realistic example), the attorney who bills the most hours may look more valuable to the firm in terms of billing. But is that same attorney really worth more to the firm than the attorney who does equal work in less time? I doubt the client would think so, and yet the numbers say otherwise.
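
Both stories reduce to the same failure: a single volume metric (calls handled, hours billed) can put the wrong person on top. The sketch below is a minimal illustration with invented names and numbers, not data from either example, showing how the ranking flips once a simple quality signal is factored in.

```python
# A minimal sketch with invented numbers: ranking workers by raw volume
# (calls handled or hours billed) versus a quality-adjusted view.
# All names and figures here are hypothetical.

workers = [
    # name, units of "volume" (calls or hours), share of work actually resolved well
    ("Agent A", 320, 0.55),   # games the metric: high volume, poor outcomes
    ("Agent B", 290, 0.95),   # slightly lower volume, far better outcomes
    ("Agent C", 240, 0.90),
]

# Ranking by volume alone -- the naive analytics view
by_volume = sorted(workers, key=lambda w: w[1], reverse=True)

# Ranking by volume weighted by outcome quality -- one possible qualitative correction
by_quality_adjusted = sorted(workers, key=lambda w: w[1] * w[2], reverse=True)

print("By volume:          ", [w[0] for w in by_volume])
print("By quality-adjusted:", [w[0] for w in by_quality_adjusted])
# By volume:           ['Agent A', 'Agent B', 'Agent C']
# By quality-adjusted: ['Agent B', 'Agent C', 'Agent A']
```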

If the complexity of a problem is ratcheted up by including many other measurable variables in the same equation (and whether something is truly "measurable" is an entirely different question), the underlying problems remain the same. The numbers by themselves do not provide a complete picture, no matter how deep "into the weeds" the calculations go. The fact that machines can crunch huge volumes of data quickly and yield results does not mean those results paint a sufficiently accurate or complete picture on which to base important decisions. Further, unless the decision maker has a good understanding of -- and concurs with -- how the different factors are weighted, it is not sensible to rely on the outcome.
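
To make the weighting point concrete, here is a minimal sketch with hypothetical options, factors, and weights. The same underlying scores produce opposite rankings depending on whose weights are used, which is exactly why a decision maker needs to understand and agree with the weighting before trusting the output.

```python
# A minimal sketch of how the weighting of factors drives the outcome.
# The candidates, factor scores, and weights are all hypothetical.

candidates = {
    "Option X": {"speed": 9, "accuracy": 5, "cost": 4},
    "Option Y": {"speed": 5, "accuracy": 9, "cost": 7},
}

def score(factors, weights):
    """Weighted sum of factor scores -- the kind of calculation a vendor
    may run without ever disclosing the weights."""
    return sum(weights[k] * v for k, v in factors.items())

vendor_weights   = {"speed": 0.7, "accuracy": 0.2, "cost": 0.1}   # the analyst's hidden choice
customer_weights = {"speed": 0.2, "accuracy": 0.6, "cost": 0.2}   # what the buyer actually values

for name, factors in candidates.items():
    print(name,
          round(score(factors, vendor_weights), 2),
          round(score(factors, customer_weights), 2))
# Option X 7.7 5.6
# Option Y 6.0 7.8
# Option X wins under the hidden weights; Option Y wins under the buyer's weights.
# Same data, opposite conclusion.
```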

Two important considerations should be evaluated before diving into increased reliance on AI:

  1. Bias -- All code (and every coder) carries bias. At first pass, this is hard to imagine. But considered in greater depth, it makes sense. We all have bias, and while it's easy to assume that because coding is a world of 1s and 0s it's very black and white, nothing could be further from the truth. Unless you know how the calculations are done and what the biases are (unintentional as they may be), the crunched numbers MUST be viewed skeptically. How was the analysis done? Did it consider, for example, not just the endpoint but the starting point (see the sketch after this list)? How much weight was given to the characteristics of the participant? Should that matter? None of these questions have obvious answers, but before making a decision based solely on what the analytics yield, I'd certainly want to know.
  2. Convenience -- Recognizing that we're all busy, having a number or chart that maps "productivity," as defined by the result of complex calculations against defined criteria, sounds like a useful tool for making fact-based decisions quickly. But predictions are typically based on previously collected data, and they need to be re-evaluated on an ongoing basis as conditions and data change. The bottom line is that without consideration of qualitative criteria, decisions made based on AI predictions simply may not be sound.
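
As a concrete illustration of the endpoint-versus-starting-point question raised in item 1, here is a minimal sketch with hypothetical participants and scores. Ranking on the final score alone and ranking on improvement from the starting point order the same people differently, and the choice between them is a bias built into the analysis.

```python
# A minimal sketch for the "endpoint vs. starting point" question.
# Two hypothetical participants: one starts high and coasts, one starts low and improves.

participants = [
    # name, starting score, ending score
    ("Participant 1", 80, 85),   # started strong, small gain
    ("Participant 2", 40, 75),   # started weak, large gain
]

by_endpoint    = sorted(participants, key=lambda p: p[2], reverse=True)
by_improvement = sorted(participants, key=lambda p: p[2] - p[1], reverse=True)

print("Endpoint only:", [p[0] for p in by_endpoint])       # Participant 1 first
print("Improvement:  ", [p[0] for p in by_improvement])    # Participant 2 first
# Neither ranking is "the right one"; picking either is a choice baked into the analysis.
```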

AI is here to stay and will become increasingly present in our lives. But it is not the answer to all things, and those who rely upon it to drive enterprise decision making would be well-advised to do so thoughtfully.

In the second part of this series, I'll highlight important legal considerations to weigh before adopting AI (or, if that ship has already sailed, before deploying it further).
