As AI continues to be deployed, it's reasonable to assume that litigation around its use will become more common. We are at the dawn of an age in which AI-generated outcomes will provide the basis for litigable causes of action, and a very hungry group of class action lawyers is looking to jump on the next big fertile area of profitable litigation. In any case, it's time to consider the legal vulnerabilities likely to be associated with AI use, particularly in the UCaaS and CCaaS spaces, so that entities deploying these tools do so with their eyes open.
The first area of concern is how these technologies can jeopardize individual privacy rights. An ad for a now frowned-upon company once carried the tagline, "If [the data is] not out there, it can't be stolen." I would add: if the data's not out there, it can't be misused either. The fact is that many of us are far too cavalier about sharing personal information in the name of convenience. We know the information is "out there." And whether or not a prompt warns us that the products and technologies supporting these interactions will capture and use our personal data (as callers, chatbot users, or texters), we are unlikely to know exactly who has our data and what they plan to do with it.
According to Blair Pleasant, president and principal analyst at COMMfusion and BCStrategies co-founder, "We know that some CCaaS vendors use the content of their customers' calls to train their AI; that's no industry secret. But do the end customers know that their conversations are being used to train the CCaaS vendors' models? The customers may have given consent to the company they're interacting with, but not to the CCaaS platform that is intercepting their calls. It's a very gray area."
Many of the biggest names touting AI-powered offerings in the CCaaS and UCaaS space have already been sued, and more suits are guaranteed to follow. When there's a cause of action, the plaintiffs' strategy is almost always "sue everyone and see what sticks." The same can be said of throwing spaghetti against the wall, but I digress.
The issue I expect to gain the most traction, at least for now, is privacy, but these matters are always more complex than they first appear. First, the United States has no overarching privacy law. The federal privacy protections currently in place were written and adopted for specific purposes: HIPAA protects health care information, and other federal statutes impose their own siloed privacy rules. Unlike the European Union, the U.S. has no national privacy law or standard, and that differs sharply from the way privacy has been, and will continue to be, protected in the EU. On top of the federal patchwork sit state privacy laws; the most well-known, and probably the most restrictive, are California's. While some states have similar rules and regulations, many have no all-encompassing privacy statute at all, creating a wild west of privacy enforcement requirements.
Second, the EU has intentionally taken a leadership role in drafting and enacting enforceable legislation governing the use of AI (the EU AI Act). As with the GDPR that came before it, the obligations and requirements of both pieces of EU legislation reach far beyond the EU's physical borders, covering EU citizens who may be outside the bloc as well as non-EU citizens who are present in the EU at the time the privacy violation occurs.
As end users continue to deploy AI-based tools in their operations, whether inward or outward facing, their technical and legal staffs should be acutely aware of how the data fed into the AI process will be used, stored, and shared. Customers also need the opportunity to understand the security of the information being shared: who has it, what those actors are doing with it, and how safe the information REALLY is.