AI Objection

We know that for all the hype surrounding ChatGPT and these "AI" models, they're actually big liars. I showed personal examples of this in March. Now a lawyer is also learning the hard way that if ChatGPT doesn't know something, it confidently lies about it. Wes Davis at The Verge has more.

After opposing counsel pointed out the nonexistent cases, US District Judge Kevin Castel confirmed, “Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations,” and set up a hearing as he considers sanctions for the plaintiff’s lawyers.

Lawyer Steven A. Schwartz admitted in an affidavit that he had used OpenAI’s chatbot for his research. To verify the cases, he did the only reasonable thing: he asked the chatbot if it was lying.

When he asked for a source, ChatGPT went on to apologize for earlier confusion and insisted the case was real, saying it could be found on Westlaw and LexisNexis. Satisfied, he asked if the other cases were fake, and ChatGPT maintained they were all real. 

The opposing counsel made the court aware of the issue in painful detail as it recounted how the Levidow, Levidow & Oberman lawyers’ submission was a brief full of lies. In one example, a nonexistent case called Varghese v. China Southern Airlines Co., Ltd., the chatbot appeared to reference another real case, Zicherman v. Korean Air Lines Co., Ltd., but got the date (and other details) wrong, saying it was decided 12 years after its original 1996 decision.

Schwartz says he was “unaware of the possibility that its content could be false.” He now “greatly regrets having utilized generative artificial intelligence to supplement the legal research performed herein and will never do so in the future without absolute verification of its authenticity.”

The Verge

Therein lies the issue with ChatGPT and all these large language models: they're out in the wild for everyone to use, but no real guardrails exist. It's why Bing's Sydney AI threatened users when they pressed it too far. These systems are next-word-guessing machines and nothing more. The companies rolling out access warn that it's experimental, but we all know people are going all-in with this stuff anyway. And unless you vet everything it presents as 'fact,' you can land in some real hot water.
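To see why "next-word guessing" produces confident nonsense, here's a minimal toy sketch (nothing to do with OpenAI's actual models, which are vastly more sophisticated): a bigram model that picks the most frequent follower of each word. The corpus and every word choice here are made up for illustration. Note that the model has no notion of whether its output is true, only of what word tends to come next.

```python
from collections import Counter, defaultdict

# Toy corpus: the model only sees word-to-word co-occurrence, not facts.
corpus = ("the case was decided in 1996 . "
          "the case was cited in westlaw . "
          "the case was real .").split()

# Count bigrams: for each word, how often each next word follows it.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word(word):
    """Return the most frequent follower -- a pure next-word guess."""
    return bigrams[word].most_common(1)[0][0]

# Generate a continuation: fluent-looking, but grounded in nothing.
word, out = "the", ["the"]
for _ in range(4):
    word = next_word(word)
    out.append(word)
print(" ".join(out))
```

The output reads like a plausible sentence fragment because plausibility is the only thing being optimized; whether the case, the date, or the citation actually exist never enters the computation. Real LLMs are enormously better at the guessing, which is exactly why their fabrications are so convincing.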