'No bad faith': Fake citations generated through ChatGPT don't warrant sanctions, attorneys argue

Attorney Steven Schwartz and his firm "have already become the poster children for the perils of dabbling with new technology; their lesson has been learned," his lawyers told the judge.


The New York lawyer who submitted fake case citations generated by ChatGPT should not face sanctions because he was not acting in bad faith, his attorneys argue.

“The Court describes this situation as ‘unprecedented,’” lawyers for the lawyer, Steven Schwartz, wrote in a declaration to a U.S. district judge. “We agree. We can find no case where, as here, a lawyer using a new, highly-touted research tool obtained cases that the research tool itself completely made up.”

Schwartz is expected to appear in court in lower Manhattan on Thursday, after the judge ordered a hearing for him to show cause as to why he should not be sanctioned for violating federal rules of civil procedure and driving up litigation costs.

“Mr. Schwartz and the Firm may be sanctioned only if they acted with subjective bad faith, that is, if they actually knew the case law was false and provided it in order to defraud the Court,” the filing reads. “That did not happen here.”

Schwartz made headlines last month when news broke that he had submitted the nonexistent citations to the court during litigation in an underlying personal injury lawsuit, in which Schwartz is representing the plaintiff. Opposing counsel said they could not find the cases, and Schwartz realized they had been wholly manufactured by ChatGPT. He apologized to the court, but the judge called for a hearing.

The error was exacerbated when Schwartz and his associate Peter LoDuca submitted an affidavit accidentally bearing a “January” date, when they had actually signed it in April.

Schwartz does not often practice in federal court, his attorneys said, and turned to ChatGPT for research on a bankruptcy issue after discovering that his firm’s Fastcase subscription no longer provided access to federal case searches.

“With no Westlaw or LexisNexis subscription, he turned to ChatGPT, which he understood to be a highly-touted research tool that utilizes artificial intelligence (AI),” the attorneys wrote. “He did not understand it was not a search engine, but a generative language processing tool primarily designed to generate human-like text response based on the user’s text input and the patterns it recognized in data and information used during its development or ‘training,’ with little regard for whether those responses are factual.”

Schwartz’s “ignorance was understandable,” his attorneys said, citing the favorable press coverage of the new tech coupled with the “vague” warnings on the ChatGPT website.

“While, especially in the light of hindsight, he should have been more careful and checked ChatGPT’s results, he certainly did not intend to defraud the Court—a fraud that any lawyer would have known would be quickly uncovered, as this was,” the document says. “There was no subjective bad faith here.”

The affidavit mishap was a mere “clerical error,” the team said, and not sanctionable misconduct.

“Finally, sanctions would serve no useful purpose,” adds the filing. “Mr. Schwartz and the Firm have already become the poster children for the perils of dabbling with new technology; their lesson has been learned. The Firm has taken and is taking a series of remedial steps: obtaining better research tools for its lawyers; implementing firm-wide CLEs in technology; imposing policies against using AI tools without checking, and more. At this point, any additional sanctions the Court imposes will be merely, and unnecessarily, punitive.”