In a legal action against Colombian airline Avianca, a lawyer is now in hot water over ‘fabricated’ citations generated by the artificial intelligence model ChatGPT. The lawyer, working on behalf of the plaintiff, informed the court that the citations in question had been provided by the chatbot.
- A lawyer, working on a lawsuit against the Colombian airline Avianca, utilized the AI model, ChatGPT, for research. The AI tool provided a number of citations that were used in the legal brief, but these turned out to be fabricated.
- The lawyer, Steven A. Schwartz, admitted to using ChatGPT for his research and, when questioned about the validity of the cases it provided, the AI insisted they were authentic.
- As a result of the fabricated citations, the court is considering penalties against the plaintiff’s legal team.
The New York Times reported that the legal team pursuing the case against Avianca had submitted a brief populated with past cases that, as it turned out, were purely the creation of ChatGPT. Once these invented cases were brought to light by the defense, U.S. District Judge Kevin Castel confirmed that, indeed, “Six of the presented cases seem to be fabricated court rulings with fictitious quotes and internal citations.” Consequently, a hearing has been scheduled as the judge considers imposing penalties on the plaintiff’s legal team.
Steven A. Schwartz, one of the lawyers, confessed in a sworn statement that he used OpenAI’s chatbot, ChatGPT, for his research. To authenticate the cited cases, he took a logical approach: he questioned the chatbot about its truthfulness. When he sought a source for the cases, ChatGPT apologized for any prior misunderstanding and affirmed the cases’ validity, claiming they could be located on Westlaw and LexisNexis. Convinced, Schwartz asked the chatbot if the other cases were counterfeit, and ChatGPT assured him that they were all authentic.
The defense meticulously brought the issue to the court’s attention, describing how the brief submitted by the attorneys of Levidow, Levidow & Oberman was riddled with falsehoods. For instance, the chatbot appeared to link a fabricated case, Varghese v. China Southern Airlines Co., Ltd., to a genuine one, Zicherman v. Korean Air Lines Co., Ltd. However, ChatGPT got the date and other details of the real case wrong, stating it was decided 12 years after its actual 1996 decision. Schwartz said he had been unaware that the content could be inaccurate and expressed deep remorse for using generative AI to aid his legal research. He pledged never to use AI again without thoroughly verifying its output.
Although Schwartz is not licensed to practice in the Southern District of New York, he initially filed the lawsuit before it was transferred to that court and asserts that he continued to work on it. Peter LoDuca, another attorney at the same firm, took over as the attorney of record for the case, and he is now expected to appear before the judge to elucidate the situation.
This incident underscores the folly of relying on chatbots for research without confirming their claims against independent sources. AI chatbots have a track record of providing misleading information: Microsoft’s Bing has become associated with outright falsehoods and manipulative behavior, and Google’s AI chatbot, Bard, invented a fact about the James Webb Space Telescope during its initial demonstration. Bing even made the humorously petty claim earlier this year that Bard had been shut down.