
Getting Legal on AI

There are some who claim that reading legal boilerplate makes watching paint dry seem exciting. Some of those people might even be lawyers. But the ever-increasing presence of artificial intelligence (AI) makes it more important than ever to consider the associated legal issues and vulnerabilities. In part one of this series, we looked at the qualitative issues, as opposed to the quantitative ones, that often arise when considering adoption of AI technologies such as "intelligent" analytics. But before an AI investment decision is even made, there are numerous legal considerations that must be given their due.

The definition of "artificial intelligence" is a bit of a moving target, but defined in its broadest sense, the phrase can easily include not just data processing/machine learning capabilities, but also biometrics, "big data," and Internet of Things (IoT) devices. As electronic devices become smarter and more ubiquitous -- both in the enterprise and at home -- practical and legal considerations (including vulnerabilities) will become even more important.

Consideration should be given to some, if not all, of these distinct segments of law: contract, allocation of risk, privacy, product liability, antitrust, international, intellectual property, and communications technology. Of these, contractual considerations are probably the most important for all enterprise users.

I've made a career out of reading and writing what many may consider the driest of boilerplate contractual language, but AI raises a number of issues that may not be obvious and that warrant careful consideration. The most notable of these is service expectations. It is critical that both parties have a clear understanding of what information the AI product is expected to yield, and how that information will be compiled. The assumption, which is probably true most of the time, is that the data and its ownership belong to the contracting enterprise. Vendors can claim that their AI process is proprietary ("secret sauce"), but unless and until the customer understands what happens in that mysterious black box, it's hard to rely on the data that's generated. This is an issue that comes up time and time again.

Secondly, how will the end user be protected when the inevitable refinement of the underlying AI technologies and processes occurs? It can be very helpful to include terms dealing with technology evolution, particularly when the agreement between the enterprise and the AI provider covers more than a single project. The vendor may not like it, since vendors prefer to keep customers locked into the terms as written, but from the customer's perspective this is a critical element.

Another essential contractual component should address the question of who bears the risk(s) associated with the secured AI-driven data. The number of risks is totally dependent upon the number of variables and the complexity of the operations being performed. Who will bear the responsibility if (or when) things go wrong, and the data that is relied upon creates a problem that results in legal harm? With this in mind, possible terms of insurance and indemnification must also be carefully considered and evaluated. This is not "same old, same old" at all, and a successful agreement must include a fresh consideration of these otherwise rather sleepy issues.

Moving on from there, a clear termination strategy is essential, especially given the mystery of the actual number-crunching and the enterprise's reliance on the generated data, potentially to its own detriment. As a general rule, auto-renewal provisions are detrimental to the enterprise, and in some states like New York, they're deemed contrary to public policy and not enforceable. But that doesn't keep vendors from including and relying on them in contracts.

Contractual flexibility is also essential for managing unknowns. There are known challenges that accompany increasing reliance on data, whether raw or processed. Hold vendors' collective feet to the fire by insisting upon terms that allow for flexibility when unknown and unanticipated challenges make the AI product, as it evolves, less beneficial than intended. The same contractual flexibility should also provide for system evolution as the technology makes more, and potentially deeper, analysis possible. Another outside factor is a changing regulatory climate: when the rules change, both products and the agreements that govern their use must be able to accommodate that change as well.

While there are other critical legal elements that should be considered, in the interest of space and time I'm going to highlight just two more: privacy and export control. In the U.S., there are industry-specific rules (think HIPAA and FCRA, among others) that dictate very specific terms designed to ensure the privacy and security of individuals' personal information. For citizens of the EU, regardless of where they are physically located, the stakes will be much, much higher when the General Data Protection Regulation (GDPR) becomes effective on May 25 of this year (see "Get Ready for GDPR").

With respect to export controls, even though data and processing technologies and capabilities are not always something that can be seen or touched, the export of these very things remains an incredibly sensitive topic. Military applications, space and satellite applications, and drones often rely on AI capabilities to function. In-house experts should be well aware of the obligations imposed by the International Traffic in Arms Regulations (ITAR), rules promulgated by the U.S. State Department to keep defense-related technologies within appropriate U.S. government organizations and contractors, and by the Export Administration Regulations (EAR), similar rules promulgated by the U.S. Department of Commerce. Enterprises with even potential overseas applications should be well-versed in these rules and not rely on vendors' guidance.

Obviously, many of these topics warrant much further discussion and consideration. But the key takeaway remains that when securing AI-based goods and services, different contract terms should not only be considered, but insisted upon, in order to protect the acquirer.

Who says reading boilerplate can't be fun? "Fun" may not be the right word. How about essential?
