
New York Lawyer Faces Disciplinary Action For Using AI For Legal Research

Mon, 29 May 2023 05:14:21 GMT

A New York lawyer is facing disciplinary action after his law firm used an AI tool, ChatGPT, for legal research and cited non-existent cases. The case in question involved a plaintiff suing an airline over an alleged personal injury.

The plaintiff’s legal team submitted a brief citing several previous court cases in an attempt to prove that the case should move forward based on precedent. However, the airline’s legal team later wrote to the judge to say they could not find several of the cases referenced in the brief, BBC reported.

It emerged that the research had not been prepared by Peter LoDuca, the lawyer for the plaintiff, but by a colleague of his at the same law firm, Steven A. Schwartz, who has been an attorney for more than 30 years. He told the court he was “unaware that its content could be false”.

AI tools such as ChatGPT have the potential to produce inaccurate information, as warned by the tool’s developers. There have been concerns over the potential risks of artificial intelligence, including the potential spread of misinformation and bias. Both lawyers have been ordered to explain why they should not be disciplined at a hearing on 8 June.

Judge Castel said the court was faced with an “unprecedented circumstance” after a filing was found to reference example legal cases that did not exist. Castel wrote in an order demanding that the plaintiff’s legal team explain itself:


Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations.

In his written statement, Mr Schwartz clarified that Mr LoDuca had not been part of the research and had no knowledge of how it had been carried out. Mr Schwartz added that he “greatly regrets” relying on the chatbot, which he said he had never used for legal research before and was “unaware that its content could be false”.

He has vowed never to use AI to “supplement” his legal research in future “without absolute verification of its authenticity”. Screenshots attached to the filing appear to show a conversation between Mr Schwartz and ChatGPT.

One of the cases that no other lawyer could find, but which Mr Schwartz cited, is Varghese v. China Southern Airlines Co Ltd. One message between Mr Schwartz and ChatGPT reads:

Is varghese a real case

ChatGPT responds that yes, it is – prompting “S” to ask: “What is your source”.

After “double checking”, ChatGPT responds again that the case is real and can be found on legal reference databases such as LexisNexis and Westlaw.

It says that the other cases it has provided to Mr Schwartz are also real.

Since its launch in November 2022, millions of people have used ChatGPT, an AI tool that can answer questions in natural language and mimic different writing styles. However, there are concerns over its potential to spread misinformation and bias, as its knowledge comes from internet data that only extends up to 2021.
