
AI & Trust: It’s Complicated

Image: Handshake between digital and human (dTosh - stock.adobe.com)
With the power it brings to complex calculations and data analysis, artificial intelligence is driving, literally and figuratively, much of what we see and do, as well as much of what goes on in the background — whether we realize it or not. In its recently released draft report on trust and AI, the National Institute of Standards and Technology (NIST) put it this way: “No longer are we asking automation to do human tasks, we are asking it to do tasks that we can’t.” But as with any newfangled technology, the capability is one thing… trusting the results is something else.
 
In “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy,” data scientist Cathy O’Neil makes the clear point that the output of an AI-based process is valuable only if a) the customer-provided data is sound, and b) the customer has a clear understanding of how the algorithm actually works. That is, a customer looking to rely on the output must understand which factors are weighted and to what extent. But many companies offering AI-based “solutions” consider their processes confidential (since, essentially, that’s what they’re selling), so assessing the value of the output can be very difficult.
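To make O’Neil’s point concrete, here is a minimal sketch of a transparent weighted scoring model. The factor names and weights are invented purely for illustration; in practice, it is exactly this table of weights that vendors often treat as a trade secret:

```python
# Hypothetical illustration: a transparent weighted scoring model.
# The factor names and weights below are invented for this sketch;
# a vendor's production model would typically keep both hidden.

FACTOR_WEIGHTS = {
    "payment_history": 0.35,
    "income_stability": 0.25,
    "zip_code_risk": 0.40,  # an opaque proxy factor -- exactly the kind of
                            # weighting O'Neil argues customers need to see
}

def score(applicant: dict) -> float:
    """Weighted sum of normalized factor values (each in 0..1)."""
    return sum(weight * applicant[factor]
               for factor, weight in FACTOR_WEIGHTS.items())

applicant = {"payment_history": 0.9, "income_stability": 0.7, "zip_code_risk": 0.2}
print(f"score = {score(applicant):.2f}")  # prints: score = 0.57
```

A customer who can see this table can ask pointed questions (why does a geographic proxy carry the largest weight?); a customer who can’t is left trusting a black box.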
 
Defining Trust in an AI Context
In its work on trust and AI, NIST defines trust this way:
“Trust serves as a mechanism for reducing complexity. When we make a decision to trust, we are managing the inherent uncertainty of an interaction partner’s future actions by limiting the number of potential outcomes. … Overall, in the evolutionary landscape, trust and distrust are used to manage the benefits and risks of social interaction. Reliance on another individual can offer advantages, but it simultaneously makes one vulnerable to exploitation and deceit. If you trust too little, you will be left wanting; trust too much and you will be taken advantage of. Game theory research has confirmed that conditional trust, a strategy for discerning between the trustworthy and untrustworthy, is evolutionarily advantageous. As such, trust was fundamental to our survival and continues to drive our interactions.”
Companies must carefully balance the potential power of AI tools to achieve certain outcomes against the risk of getting it wrong. On the upside, companies can compile and massage vast volumes of data to support conclusions used in complex undertakings, from microsurgery to nuclear reactor construction. However, without trust in the outcome, the risks in taking any action can be overwhelming. Additionally, companies must recognize the difference between technically sound statistics and outcomes on the one hand and trustworthiness on the other. The European Commission has created models and tools for considering these issues head-on: the Assessment List for Trustworthy AI (ALTAI) and the European Union regulation on fostering a European approach to AI.
 
Also important to recognize: despite what any marketing person claims, the elimination of bias, even in code, is an unrealistic goal — we all have biases based on our training, education, or conditioning, and it’s important never to lose sight of this. What is possible is the identification and management of bias. While measurements are helpful in managing these powerful tools, they are not totally determinative. This is because, at the most basic level, code has neither common sense nor emotion, both of which can play critical roles in complex decision-making.
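As a small, hedged illustration of what “identification and management” can look like in practice, the sketch below computes one common bias measurement, the disparate impact ratio, on invented decision data. Note what it can and cannot do: it flags a disparity for human review, but it cannot decide whether the disparity is justified — that judgment requires exactly the common sense the code lacks:

```python
# A sketch of *measuring* bias rather than claiming to eliminate it.
# The groups, decisions, and 80% threshold below are illustrative assumptions.

def selection_rate(decisions):
    """Fraction of positive (1) decisions within a group."""
    return sum(decisions) / len(decisions)

group_a = [1, 1, 0, 1, 1, 0, 1, 1]  # hypothetical model decisions, group A
group_b = [1, 0, 0, 0, 1, 0, 0, 1]  # hypothetical model decisions, group B

rate_a = selection_rate(group_a)  # 0.75
rate_b = selection_rate(group_b)  # 0.375
impact_ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"selection rates: A={rate_a:.2f}, B={rate_b:.2f}")
print(f"disparate impact ratio = {impact_ratio:.2f}")  # 0.50

# A common (but not definitive) rule of thumb flags ratios below 0.80.
if impact_ratio < 0.80:
    print("flagged for human review -- a measurement, not a verdict")
```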
 
The Essence of Trust
In its draft document, NIST has identified nine factors, with various and fluctuating weights, that it deems necessary to create or support user trust. These are: accuracy, reliability, resiliency, objectivity, security, explainability, safety, accountability, and privacy. In every situation where an AI offering can benefit an enterprise, a consideration of these factors is critical if the outcomes are to be trusted. In addition, I would suggest that another factor — current-ness (as opposed to currency, which is something else entirely) — is also essential to a successful deployment and utilization. That is, does the process/algorithm that’s being offered provide a current, sophisticated, and solid approach that reflects the first nine factors?
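How those weights fluctuate is easiest to see with numbers. The sketch below scores the nine NIST factors, plus the proposed current-ness factor, using per-deployment weights; every weight and rating in it is a made-up assumption, since NIST’s draft prescribes no numeric values:

```python
# Sketch: scoring NIST's nine trust factors (plus the proposed
# "current-ness") with per-deployment weights. Every weight and
# rating here is a made-up assumption for illustration only.

FACTORS = [
    "accuracy", "reliability", "resiliency", "objectivity", "security",
    "explainability", "safety", "accountability", "privacy", "current_ness",
]

def trust_score(ratings, weights):
    """Weighted average of 0..1 factor ratings; weights vary by use case."""
    total = sum(weights.values())
    return sum(weights[f] * ratings[f] for f in FACTORS) / total

# A surgical-robotics deployment might weight safety and accuracy heavily...
weights = dict.fromkeys(FACTORS, 1.0)
weights.update({"safety": 3.0, "accuracy": 3.0})

# ...and suffer when the vendor keeps the algorithm confidential.
ratings = dict.fromkeys(FACTORS, 0.8)
ratings["explainability"] = 0.4

print(f"trust score = {trust_score(ratings, weights):.2f}")  # prints 0.77
```

The point of the exercise isn’t the final number; it’s that a surgical-robotics deployment and a marketing chatbot should not weight safety or explainability the same way.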
 
As NIST noted in its document, as far back as 2002, when Bill Gates headed Microsoft, he sent a memo to employees defining the phrase “trustworthy computing.” Long before the current ubiquity of AI, he recognized the challenges of creating products that consumers trusted to identify, process, and store information. He defined the phrase this way: “…customers will always be able to rely on these systems to be available and to secure their information. Trustworthy Computing is computing that is available, reliable and secure.”
 
Unquestionably, AI tools bring incredible power and resources to decision makers at all levels of essentially everything. From managing efficiencies to analyzing tumor data to help surgeons and the robots that guide them, AI offers incredibly valuable input. But unless that input is reviewed and trusted, both by those who use it to make decisions and by those affected by those decisions, its relevance and importance may not stand on solid ground.
 
If you’re grappling with how and when to use AI for your organization, join me at Enterprise Connect 2021, Sept. 27-29 in Orlando, Fla., for my session, “Regulation, Bias & Ethics in AI: Impact on Your Enterprise.” I’ll get you up to speed on federal action around AI, dig deeper into how bias comes into play, and explore ways of approaching AI use ethically. As a No Jitter reader, be sure to use the code NJAL200 to take $200 off the current registration rate!