We are all intrigued by the possibilities of Artificial Intelligence. But what happens when AI goes crazy? One lawyer in New York City will soon find out. Steven A. Schwartz, a lawyer with 30 years of experience who represents the plaintiff in Roberto Mata v. Avianca, Inc., submitted a brief that was partly generated by AI. A second lawyer, Peter LoDuca, also signed the brief but apparently did not contribute to it.

The opposing lawyer knew that area of law well and recognized early on that the cases Schwartz cited did not exist. In all, six of the cited court decisions could not be found. Federal Judge Castel asked Schwartz to submit an affidavit explaining how he came up with his brief. In that affidavit, Mr. Schwartz admitted that the six cited decisions did not exist and said he had relied on ChatGPT to create the brief, which included made-up quotes from the non-existent cases. Schwartz said he asked ChatGPT whether the cases were real, and ChatGPT assured him they were, saying the decisions could be found in “reputable databases, such as LexisNexis and Westlaw.” ChatGPT was not correct.

Judge Castel has set a hearing for June 8 at which Mr. Schwartz must show cause why he should not be sanctioned. See the ABA Journal report here.