
Ethical Considerations When Deploying AI Processes


Image: Anna Berkut - Alamy Stock Photo
One publication that makes its way into my inbox almost daily frequently uses the phrase “AI Ethics.” My objection to this phrase may seem overly nitpicky, but the fact is that Artificial Intelligence (AI), having no common sense, also has no ethics in and of itself. The real issue is what I would label the ethical use of AI applications and systems, a title that describes a completely different, real, useful, and even quantifiable tool. Reliance on AI to assist — or even drive — decision-making is becoming increasingly prevalent wherever all but the smallest amounts of data must be analyzed.
 
Another pet peeve of mine is the claim that a system — particularly one that references AI underpinnings — can be without bias. (Not the case. Ever.) We all have biases (some more easily recognizable than others, but that’s another issue), and no person or entity should ever claim that bias has been eliminated. We have biases based on the things we’ve learned, both consciously and unconsciously, including the way we’ve learned and the experiences we’ve had or seen in others. Again, eliminating bias isn’t possible. What is possible, however, is recognizing and managing it. Most biases can — and must — be identified so that those who use these potentially powerful (read: life-changing) tools can apply them in the most beneficial way and minimize the risk that bad information — or even good information — is used improperly, producing bad decisions and bad outcomes.
 
No Jitter contributor Kevin Kieller turned me on to a book called “Invisible Women: Data Bias in a World Designed for Men” by Caroline Criado Perez. Far from being a man-hating trope, the book highlights how many decisions, both before and after the widespread deployment of AI, were based on biased assumptions that didn’t consider the patterns of people who weren’t white males. From perspectives embedded in commonly used words and phrases to cluelessly and unintentionally biased municipal decisions regarding snow removal, these examples are a frighteningly clear indicator that inherent bias runs very deep. As mentioned in the book, it seems ridiculous that the professor of a literature class at Georgetown University opted to title it “White Male Writers.” Hate mail ensued.
 
This title seems ridiculous at first blush, but is it? Have we not simply assumed that when a literature class is offered without further description, it covers the work of white male writers? That assumption illustrates the insidious nature of bias: it is so commonplace that we don’t even see it, and thus can’t manage the outcomes it produces.
 
After considering inherent bias, the next step before even considering the use of an AI-based system is to recognize that AI systems, in whatever form they occur, have no common sense — none. As such, the outcomes generated by AI tools are only as good as the data that is fed into them. Think “garbage in, garbage out.” The problem is compounded by tools that crunch numbers beautifully but have a spotty record on analysis. On the positive side, however, AI’s great strength is its reliance on mathematical models to validate conclusions. Nowhere is this truer than in the context of medical care.
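To make “garbage in, garbage out” concrete, here is a minimal sketch in Python. The groups, records, and threshold are entirely invented for illustration; no real system or dataset is implied. A toy scorer “trained” on historically skewed approval data simply replays that skew in its predictions, because it has no common sense to tell it the history itself was biased.

```python
from collections import Counter

# Hypothetical historical records: (group, approved) pairs in which group B
# was approved far less often for reasons unrelated to merit.
history = [("A", True)] * 80 + [("A", False)] * 20 + \
          [("B", True)] * 30 + [("B", False)] * 70

def fit_approval_rates(records):
    """'Train' by memorizing the observed approval rate for each group."""
    totals, approvals = Counter(), Counter()
    for group, approved in records:
        totals[group] += 1
        approvals[group] += approved
    return {g: approvals[g] / totals[g] for g in totals}

def predict(rates, group, threshold=0.5):
    """Approve whenever the group's historical rate clears the threshold."""
    return rates[group] >= threshold

rates = fit_approval_rates(history)
print(rates)                 # {'A': 0.8, 'B': 0.3}
print(predict(rates, "A"))   # True  -- the historical skew is simply replayed
print(predict(rates, "B"))   # False
```

The arithmetic is flawless and the output looks authoritative, yet the model has done nothing more than reproduce the bias it was handed.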
 
A recent article in Fortune by Jeremy Kahn makes this vital point: “In the absence of [critical and patient-specific] information, the tendency is for humans to assume the AI is looking at whatever feature they, as human clinicians, would have found most important. This cognitive bias can blind doctors to possible errors the machine learning algorithm may make.”
 
This statement brings me back to the points raised in the incredibly insightful book by Cathy O’Neil called “Weapons of Math Destruction.” In this book, the author, who holds a Ph.D. in math from Harvard and has taught at leading universities in the U.S., makes several important points, all of which I believe are critical when it comes to the use of AI.
 
First, AI-generated information must always be placed in context. As I wrote for No Jitter in 2020, “Managers and those who rely on AI-based information must understand the context of both the data that’s input as well as the generated outcome. With additional complexity comes additional responsibility for validation of the input and output.”
 
This point requires careful consideration. Before you can provide the “right data” to the application, consider first whether you are asking the right questions. Second, how is the data being manipulated? Are different factors weighted differently? What justifies the priorities those algorithms encode? Are those priorities correct? Is the system designed to answer questions or to validate desired outcomes?
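As a rough illustration of how much those weighting decisions matter, consider this hypothetical sketch: two candidates, two features, and two weighting schemes. Every name and number here is made up; the point is only that the same inputs yield opposite “answers” depending on which priorities the algorithm encodes.

```python
# Hypothetical candidates with two scored attributes each.
candidates = {
    "applicant_1": {"experience": 0.9, "test_score": 0.4},
    "applicant_2": {"experience": 0.3, "test_score": 0.95},
}

def score(features, weights):
    """Weighted sum of the candidate's features."""
    return sum(weights[name] * value for name, value in features.items())

weights_a = {"experience": 0.8, "test_score": 0.2}   # prioritizes experience
weights_b = {"experience": 0.2, "test_score": 0.8}   # prioritizes the test

for name, feats in candidates.items():
    print(name, round(score(feats, weights_a), 2), round(score(feats, weights_b), 2))
# applicant_1 wins under weights_a (0.80 vs. 0.43);
# applicant_2 wins under weights_b (0.82 vs. 0.50).
```

Neither weighting is “wrong” in a mathematical sense; the choice between them is a human judgment, and that judgment is exactly where scrutiny belongs.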
 
Rarely do answers to these questions come easily, and some of the answers, particularly those regarding how the sausage gets made, can be difficult to secure — especially when vendors offering the “AI solution” want to keep their specialized processes private. Until you have a firm grip on the quality of the data, the quality of the processes, and the overall purpose of the exercise, it’s hard to appreciate the value of the AI-generated outcome, and ultimately to use it to assist in the operation of a business or the management of a problem.
 
This area is of great interest to me for many reasons. And while I am not a litigator, I can certainly see areas where decisions made with AI input are fraught with risk for those who haven’t carefully considered the issues of bias and outcome validity. Until these questions can be thoroughly contemplated and vetted by your company, legal and practical vulnerabilities lurk in the deep.