All of a sudden, it seems that artificial intelligence (AI) has become a hot topic (read: revenue generator) for media outlets. During a football or hockey game, or even the evening news, companies with AI-based products to sell try to convince you -- with successful-looking people, chic office space, cool cars, and nice sunsets -- that AI is the greatest thing ever. That may be true, but like almost everything else in life, nothing is black and white.
How entities define AI varies greatly, in much the same way that you'd get five different answers if you asked the next five people you see to define the color blue. Tailoring these definitions to an individual enterprise's needs and concerns can make for some lively thought and conversation, while simultaneously creating new perspectives on decision making with AI tools.
Since defining what AI is, and learning what it may be able to do in your space, is such a hot topic, with only a little push I secured a spot at this month's Enterprise Connect conference to discuss these very issues. And I don't mean lecture... I mean discuss. I'm looking forward to a great conversation about understanding AI and evaluating its risks and rewards. Enterprise Connect takes place the week of March 18, in Orlando, Fla., and this session will be held on the exhibition floor, in EC Theater 2200, from 4:10 p.m. to 4:30 p.m. on the opening day.
Let’s start with this. Alone, AI as a tool is as valuable to the enterprise as strings are to a tennis racquet. Strings are an essential part of what makes the racquet work, but by themselves, they’re simply, well, strings. If you’re being sold -- or selling -- AI-generated information, you absolutely must take it upon yourself to understand the inherent bias or biases in the data manipulation that’s being done. By doing so, you can ensure that the resulting information is properly qualified and can be harnessed as effectively as possible.
To accomplish this, both the enterprise and the vendor must understand and validate the quality of the input data, know the weighting for the different factors in the equations, and recognize that the AI-generated information is only part of the solution. The risk of getting it wrong -- that is, making the wrong decision based on flawed AI (defining “flawed” is a whole other topic) and thus negatively impacting your enterprise’s operations and the ability to generate a return on investment -- can be the ultimate downer.
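To make the weighting point concrete, here's a minimal, purely illustrative Python sketch. The factor names and weights are hypothetical, not drawn from any real vendor's model; the point is simply that the same input data can yield very different "AI-generated" recommendations depending on the weights applied, which is exactly why the enterprise needs visibility into them.

```python
# Illustrative only: a toy weighted-scoring model with hypothetical factors,
# showing how the choice of weights -- not the data alone -- drives the outcome.

def score(features: dict, weights: dict) -> float:
    """Weighted sum over whatever factors the model considers."""
    return sum(weights[name] * features.get(name, 0.0) for name in weights)

# The same candidate data...
candidate = {"cost_savings": 0.9, "customer_impact": 0.3, "implementation_risk": 0.8}

# ...scored under two different (hypothetical) weightings.
vendor_weights     = {"cost_savings": 0.7, "customer_impact": 0.2, "implementation_risk": -0.1}
enterprise_weights = {"cost_savings": 0.3, "customer_impact": 0.5, "implementation_risk": -0.6}

print("Vendor-weighted score:    ", round(score(candidate, vendor_weights), 2))
print("Enterprise-weighted score:", round(score(candidate, enterprise_weights), 2))
# Identical inputs, materially different recommendations -- the weights are the bias.
```

Identical data, two defensible-looking scores pointing in opposite directions: that gap is where the qualification work described above has to happen before anyone acts on the output.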
Important legal considerations also come to the surface when AI becomes part of an enterprise's decision-making process. It's always about liability: where does the liability fall when things go wrong? Time for my favorite answer -- "it depends." But failing to consider the legal ramifications of an AI-fueled decision that produces an undesirable outcome, whether long term or short, is nothing short of bad business, if not malpractice.
Please join me for this interesting and complex discussion on Monday, March 18! AI’ll (ha!) look forward to seeing you there. If you have questions in advance, please
email me.