
AI Regulation: Holding Data Bad Actors Accountable

Image: phonlamaiphoto - stock.adobe.com
Artificial Intelligence (AI) has been part of our professional and civilian lives for some time, but both the European Union (EU) and the U.S. Federal Trade Commission (FTC) have recently taken prominent steps to recognize its power and its potential for harm. As more entities rely on AI algorithms to predict, recommend, or make decisions based on troves of collected data, governments recognize the clear perils inherent in over-reliance on automated decision-making and predictive analytics.
 
As noted AI thinker Cathy O’Neil has argued many times in her book Weapons of Math Destruction and on stage, bias cannot be eliminated; it can only be identified and managed. No one wants to be “on the hook” the way Amazon was back in 2018, when the AI tool it used to identify ideal job candidates turned out to carry unintentional but clear bias against women applicants.
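To see how that kind of bias “comes through,” consider a minimal sketch in Python. The numbers and the proxy feature here are invented for illustration, not drawn from Amazon’s system; the point is only that a model fit to biased historical decisions reproduces the bias even when the protected attribute never appears in the data.

```python
# A minimal sketch (invented numbers, not Amazon's data) of how a model fit to
# biased historical hiring decisions reproduces that bias even though the
# protected attribute itself never appears in the training data; a correlated
# proxy feature carries the signal instead.

# Hypothetical resume records: (has_proxy_feature, hired). Imagine the proxy
# is something like membership in a women's organization, correlated with a
# protected class that was historically under-hired.
history = (
    [(1, 0)] * 80 + [(1, 1)] * 20    # proxy present: hired 20% of the time
    + [(0, 0)] * 40 + [(0, 1)] * 60  # proxy absent:  hired 60% of the time
)

def hire_rate(records, proxy_value):
    """Empirical P(hired | proxy = proxy_value) in the training history."""
    outcomes = [hired for proxy, hired in records if proxy == proxy_value]
    return sum(outcomes) / len(outcomes)

# The simplest "model" that fits this history is the conditional rate itself,
# and it inherits the skew: candidates with the proxy feature score far lower,
# with no protected attribute anywhere in the data.
print(f"score for proxy=1: {hire_rate(history, 1):.2f}")  # 0.20
print(f"score for proxy=0: {hire_rate(history, 0):.2f}")  # 0.60
```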
 
In the U.S., the FTC’s role is to police unfair or deceptive trade practices. While it has specific enforcement tools against dishonest business conduct, its jurisdiction is limited and excludes government agencies, banks, and non-profits. When companies mislead consumers or otherwise oversell their products and services, the FTC can act. Most recently, the FTC indicated that it would use this authority against companies selling biased algorithms in particular. Given that all algorithms are inherently biased, this recent FTC blog post was a loud and powerful shot across the bow.
 
In the post, the FTC signaled its willingness not only to intervene but also to enforce against companies that misrepresent the capabilities of the products they offer. These include companies using facial recognition systems, predictive policing algorithms, and healthcare tools, all of which can produce negative outcomes for the groups affected by their inherent bias.
 
Though AI is a relatively new commodity, the FTC has years of enforcement experience with three particular legal tools. Section 5 of the FTC Act (1914) prohibits unfair or deceptive practices, which would include the sale or use of racially biased algorithms. The Fair Credit Reporting Act (1970) has been used against entities when an algorithm denies employment, housing, credit, insurance, or other benefits based on consumer credit reports and histories maintained by credit reporting agencies. The Equal Credit Opportunity Act (1974) makes it illegal to use biased algorithms that result in credit discrimination on the basis of race, color, religion, national origin, sex, marital status, age, or receipt of public assistance.
 
AI market players have taken notice of that post, with its no-nonsense tone and its expressed commitment to go after AI tools that have the potential to do real harm, particularly to disadvantaged groups. They should also take note of the FTC’s 2020 guidance, “Using Artificial Intelligence and Algorithms.”
 
The FTC’s willingness to apply a full-court press to any entity that touches issues related to COVID-19 is also significant. It’s fair to say the FTC has been given superpower enforcement authority over bad actors that come anywhere near the COVID-19 space, even entities that rely on biased AI algorithms (and all AI algorithms are biased) and inadvertently find themselves there.
 
The EU, for its part, recently released proposed long-term regulations addressing the misuse of AI. Though these proposals remain in draft form, they represent equally strong measures to rein in the technology’s potential misuse.
 
The EU documents intend to secure (some might go so far as to say “force”) transparency when such technologies are used by commercial enterprises. On both sides of the pond, however, government use and misuse of these technologies remains a black hole: as previously mentioned, the FTC has no authority to go after governments, and the EU proposals similarly include “carve-outs” for governmental use of the technologies. And since the EU’s documents are only proposed rules, it’s too early to know what will be enforced, and how.
 
Either way, AI is here to stay. But it’s important to remember that AI has no conscience, and it is only as good and as reliable as the data it uses to generate results. How the math is computed, and how the elements critical to the calculations are weighted to yield results, often remains a mystery; hence the repeated calls for transparency. Lastly, anyone who says that AI isn’t biased is wrong: AI is inherently and definitely biased. The question is not whether bias exists, but how to identify it and manage around it. Assuming that bias doesn’t exist is nothing short of unrealistic.
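What “identifying the bias” can look like in practice is sketched below: a disparate-impact check patterned on the four-fifths rule used in U.S. employment-discrimination analysis. The group labels, the audit data, and the threshold are illustrative conventions for this sketch, not any regulator’s official test.

```python
# A hedged sketch of one way to *identify* bias in a deployed model's output:
# a disparate-impact ratio patterned on the "four-fifths rule" from U.S.
# employment-discrimination analysis. Group labels, audit data, and the 0.8
# threshold are illustrative, not a regulator's official test.

from collections import defaultdict

def selection_rates(decisions):
    """Favorable-outcome rate per group, from (group, selected) pairs."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, selected in decisions:
        totals[group] += 1
        favorable[group] += int(selected)
    return {g: favorable[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Ratio of the lowest group selection rate to the highest (1.0 = parity)."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical audit log of an automated screening tool's yes/no decisions.
audit_log = (
    [("group_a", True)] * 48 + [("group_a", False)] * 52
    + [("group_b", True)] * 30 + [("group_b", False)] * 70
)

ratio = disparate_impact_ratio(audit_log)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.30 / 0.48 = 0.62
if ratio < 0.8:  # the four-fifths convention
    print("Potential adverse impact: review features, weights, and training data.")
```

A check like this doesn’t prove or disprove discrimination, but it gives auditors a concrete, repeatable starting point for the “identify and manage” work O’Neil describes.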
 
AI is, and will continue to be, a powerful tool. But like so many tools, without human understanding of how the underlying algorithms work and what biases are built in, its results are likely to be suspect at best and acutely harmful at worst. Governments across the world are taking notice, and taking action.