
Using ChatGPT to Practice Law? Use Common Sense Instead

Let’s start with a basic premise: AI has no common sense. Period. Sadly, however, those who over-rely on AI for important decisions sometimes don’t either. Keep reading.

In late May, Thomson Reuters, which bills itself as a global content and technology company, announced that, as part of a $100 million investment in what it terms “AI capabilities,” it has created a new plugin for Microsoft 365 Copilot which, according to its press release, will “bolster efforts for redefined professional work starting with legal research, drafting and client collaboration.” Sounds useful -- but only if those who rely on this collaboration recognize the challenges it creates, not only for those allowing AI to support, aid, or direct their legal research, but also for those relying on that research to make decisions. Reminder: AI has no common sense. (You may sense a theme here.)

Again, to quote from the press release: 

 “Generative AI empowers professionals to redefine their work and discover innovative approaches,” said Steve Hasker, president and CEO, Thomson Reuters. “With our customers in the driver’s seat, Thomson Reuters AI technology augments knowledge, helping professionals work smarter, giving them time back to focus on what matters."   

This is possibly true in many cases, but likely not all. And given the bad behavior that has surfaced over just the past few months (read on), generative AI can be so easily misused and misled that the risks are extreme.

Legal research is a complex process that relies on careful reading and, most importantly, on identifying issues and subtle distinctions (think hair-splitting, actually). Is this something AI can do? Possibly, but possibly not. In her well-respected and widely read 2016 book Weapons of Math Destruction, author Cathy O’Neil goes to great lengths to discuss bias and how the biases we carry shape the questions we ask. Whether you’re ordering a milkshake or evaluating options for presenting a legal argument, how a question is posed can make a huge difference in the answer, whether that answer is the ice cream recipe for the shake or the best legal position.

Considering the subtleties upon which litigation often turns, the phrasing of even the simplest question can expose biases that affect outcomes in both anticipated and unanticipated ways. Generative AI in this context is just another tool for identifying potential outcomes and does not, in and of itself, differ from other forms of legal research. For the record, plenty of good lawyers (and bad ones too) fail to find the right cases or spot weak arguments all by themselves. But using generative AI for the (often) drudgery of legal research adds yet another layer of variables that may yield inaccurate, misleading, and potentially harmful results.

ChatGPT is probably the most famous AI tool. According to Robert Harris, ChatGPT is designed to provide “a slightly more random sentence construction instead of the very best sentence construction in order to sound more creative.” Who decides what the best sentence construction is remains a whole other matter.
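For readers curious what that randomness knob looks like in practice, here is a minimal, hypothetical sketch using the temperature setting in the OpenAI Python SDK. The prompt, model name, and temperature values are illustrative assumptions on my part, not anything drawn from Thomson Reuters’ announcement, and ChatGPT’s consumer interface does not expose this control to end users.

# A minimal sketch (illustrative only) of how the "temperature" parameter
# in the OpenAI API controls how random the generated text is.
# Lower values make output more predictable; higher values make it more
# varied -- and, as Harris describes, more "creative."
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY is set in the environment

prompt = "Summarize the holding of a breach-of-contract case in one sentence."

for temperature in (0.0, 0.7, 1.2):
    response = client.chat.completions.create(
        model="gpt-4o-mini",      # model choice is a placeholder for illustration
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,  # 0.0 = most deterministic, higher = more random
    )
    print(f"temperature={temperature}: {response.choices[0].message.content}")

The point is simply that the same question can come back worded differently from one run to the next, which is exactly why everything the tool produces, citations included, still needs human verification.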

Recently, several lawyers have found themselves in trouble over the use of ChatGPT. In one case, an attorney submitted a brief in federal court that relied upon cited cases that do not exist. The case is, in fact, a real personal injury matter filed against Avianca Airlines. There appears to be a record of the attorney’s communications with ChatGPT indicating that he relied on what the tool turned up without verifying the actual content of the response, including case names and citations, and then incorporated those findings into the pleadings. Again, attorneys who don’t use AI tools also make errors, but not at this level, and certainly not in federal court. The attorney is due in court in early June, where both he and his firm are expected to be sanctioned. To me, that’s the least that can be done.

While I won’t go so far as to suggest that use of ChatGPT or other AI tools is full-fledged cheating, it’s not a big jump to see uncritical reliance on such tools as precisely that. Saving valuable billable hours might be beneficial for clients, but without careful attorney vetting of what the AI tool has found, reliance on its results may create more problems than it solves. There’s also no indication that an attorney would take a lighter hand in billing for having used AI tools to save costly research time. Just worth considering…

Robert Harris commented that this problem isn’t entirely the fault of the attorneys. “The entire IT industry is telling the attorney that if he’s not already using ChatGPT or another tool, he/she is already falling behind.” I take some issue with this, because at the end of the day the attorney remains responsible for the quality of the work product. Still, Robert makes a worthy point.

In a medical context, AI tools have been useful in validating human conclusions because of the masses of data they can analyze. But the AI tools aren’t making the decisions; they’re just supporting them. This is a critical distinction, particularly when medical malpractice claims are a distinct possibility if an error, whether in judgment or application, causes a negative patient outcome.

AI tools, including, but not limited to, ChatGPT, aren’t going anywhere. But how they’re used, how their use is disclosed, and how machine-generated conclusions are validated remain key issues demanding careful consideration. Reminder: Sexy and speedy features aside, AI still has no common sense.