- The Washington Times - Thursday, October 23, 2025

Two federal judges have publicly admitted that they used artificial intelligence in their work, leading them to retract court orders after the AI introduced errors into their rulings.

The summer rulings by U.S. District Judges Julien Xavier Neals in New Jersey and Henry T. Wingate in Mississippi had all the hallmarks of AI “hallucinations,” but that had been speculation until they both confirmed it in response to an inquiry from Senate Judiciary Committee Chairman Charles E. Grassley.

One judge said a law school intern wrongly used the AI, while the other said it was a clerk in his office — though in both cases, the judges themselves were the ones who signed the final rulings.



Both men, in letters to the office that oversees U.S. courts, said they have taken steps to prevent a repeat.

Mr. Grassley, Iowa Republican, praised the judges for coming clean and urged other judges to learn from the fiasco.

“We can’t allow laziness, apathy or over-reliance on artificial assistance to upend the judiciary’s commitment to integrity and factual accuracy. As always, my oversight will continue,” he said.

The judges, in their responses, made clear just how much the AI world has turned into the Wild West.

Judge Wingate said his office didn’t have any rules about generative AI. Judge Neals said he has a policy forbidding generative AI, including ChatGPT, the tool the law school intern used. Judge Neals said the student’s university also had a policy barring AI use, which the student appears to have violated as well.

AI hallucinations occur when a generative AI tool fabricates a legal citation or argument. Common examples include made-up case names, misstated outcomes of rulings and manufactured quotes attributed to legitimate decisions.

The legal world has for several years been rife with stories of AI hallucinations making their way into lawyers’ briefs, and some judges have imposed stern sanctions on attorneys who filed AI-polluted documents.

But having two federal judges file rulings with AI-induced errors, then refusing for months to own up to the reason for the mistakes, took things to a new level.

Mr. Grassley said it threatened to undermine the credibility of the courts and potentially jeopardize litigants’ rights.

Susan Tanner, a law professor at the University of Louisville in Kentucky, said the two judges’ explanations should be a pivotal moment for the judiciary.

“The legal system depends on careful deliberation and verification. Generative AI, on the other hand, is built for speed and fluency,” she said. “That mismatch between AI’s quick confidence and the slow, careful work of legal reasoning means we need to be really thoughtful about how we integrate this technology, not just reactive.”

Damien Charlotin, a researcher and lecturer at HEC Paris, maintains a database of AI hallucinations in court cases. In addition to the two federal judges, he lists another case from a state judge in Georgia, as well as 118 instances in which lawyers may have used AI fabrications in their briefs.

Judge Neals, a Biden appointee, had issued an opinion in June that misstated the outcomes of earlier precedents and included quotes that never actually appeared in the rulings it cited. The judge also wrongly attributed statements to litigants.

Judge Wingate, a Reagan appointee, issued a restraining order in July against Mississippi’s law limiting the teaching of diversity, equity and inclusion principles in schools. His opinion fabricated text of state law and cited people who weren’t parties to the case as litigants.

He had previously called the issues “clerical errors.”

Mr. Grassley’s letter prodded the judges to own up.

Judge Wingate said his erroneous opinion was posted prematurely and should have gone through more verification, including a citation-checking tool. He said that’s why he called it a clerical error and didn’t attribute it to AI.

Judge Neals’ office had previously leaked to a reporter that a “temporary assistant” had used AI. In his new letter, he said the law student used ChatGPT to do legal research. He said the draft opinion that incorporated that research was then posted before going through review and citation-checking.

“It was a draft that should have never been docketed,” he said.

Mr. Grassley had wondered why the judges pulled down the erroneous opinions altogether and replaced them with updated rulings, which could be seen as an attempt to hide the past mistakes.

Both judges said they felt it would be wrong to leave flawed opinions on the public docket. But both said the original botched rulings remain with their clerks’ offices.

The Administrative Office of the U.S. Courts, the body that oversees federal judicial operations, is reviewing how AI is used in the courts.

Director Robert J. Conrad Jr., a former federal judge, said AI presents “opportunities” and “concerns” for the courts. His office issued interim guidance in late July, after the two judges’ bungled rulings, calling for accountability in the use of AI but not discouraging it.

The guidance called for courts to review all AI-generated content and to be wary of delegating to AI “core judicial functions,” such as actual decisions in cases. It also recommended judges consider disclosing the use of AI.

Ms. Tanner said written policies aren’t enough.

“What’s really needed is ongoing training and a culture where people learn to use AI carefully and thoughtfully,” she said in an email. “Policies can set boundaries, but without real education about how these models actually work (what they’re good at, where they fall short, and how they can make stuff up while sounding totally confident), users are left thinking they’re safer than they really are.”

• Stephen Dinan can be reached at sdinan@washingtontimes.com.

Copyright © 2025 The Washington Times, LLC.
