April 2, 2026

"My law clerks would be wasting 30, 45 minutes, an hour, developing a chronology of events. This thing does it instantaneously.... I’m not strictly relying on an AI tool. … It’s just an extra set of eyes."


Says Xavier Rodriguez, a federal judge in Texas, quoted in "Judges are increasingly using AI to draft rulings and prepare for hearings/A study found over 60 percent of surveyed judges have used AI in their work, even as some experts worry AI’s unreliability could compromise their authority" (WaPo)(gift link).

A study found over 60 percent of surveyed judges have used AI — that is to say, over 60 percent admitted to researchers that they've used AI. I've got to wonder what percent have used AI. How was the question asked? Was it "Have you used AI?"? Because what does "use" mean? Maybe things that aren't really substantive don't count. Maybe it doesn't count if you only rely on things you — that is, your clerks — have double checked.
“I’m uploading everything,” Rodriguez said, describing how he would use AI to begin reviewing a summary judgment request in a hypothetical age discrimination case. “And then I’ll ask, ‘Identify any potential statements made in this age discrimination case that appear discriminatory.’”

"Grok, what do you think of that?" I asked, giving Grok the link to the WaPo article. Grok said:

This is a natural and mostly positive development, but it demands rigorous guardrails. Courts are drowning in caseloads, and good AI can act as a powerful research assistant or summarizer—freeing judges to focus on the human elements of judging: weighing evidence, applying judgment, interpreting law in context, and ensuring due process. Tools that generate neutral timelines or flag relevant precedents (when properly verified) are efficiency gains, not replacements for reasoning. Judges aren't (and shouldn't become) rote bureaucrats; they're decision-makers who must own the outcome....

Ah, the human elements of judging — you know, the part where you feel you know where the right answer lies. And where did that feeling come from? Something prompted you, something somewhere in that supposedly neutral timeline or flagged precedent that you read to get up to speed.

69 comments:

Reddington said...

Do these judges understand the AIs tend to give you the answer they think you want? “Identify any potential statements made in this age discrimination case that appear discriminatory” is a great prompt for getting a confirmation that there are such statements, but ask it differently and you’d get the opposite result.

Justabill said...

AI sneaks in over the transom. Type in a search and the first thing you get is the AI summary. Takes a genuine act of will not to rely on that if you’re rushed, I imagine.

Justabill said...

I would worry more about the hallucinated legal citations than the factual summaries of the record.

FredSays said...

“even as some experts worry AI’s unreliability could compromise their authority.” Sorry, the authority ship has sailed.

narciso said...

What evidence is there that judges know how to do this

Eric the Fruit Bat said...

There's an evolutionary explanation, possibly.

Money Manger said...

It's what word processors did to the typewriter.

RideSpaceMountain said...

An aspect of Augmented Interrogation that's seldom discussed is why it's doing the heavy lifting that search engines used to. I'm now old enough to remember an era when search engines didn't suck. Before SEO and advertising made them practically unusable. When logical queries more often than not produced the data, boilerplate, or direct link to the thing typed in the box. Don't get me wrong, the summarization feature of LLMs is helpful, but I grade it as marginally so, especially when professionals still need to check their work.

It's frustrating to imagine that the same thing that happened to search engines will eventually happen to AI. That it will eventually be filled to the gunwales with so much garbage and attempts by commercial interests to game the function that it will become just as useless, and I'm pretty certain that it's probably happening as I type.

Aggie said...

What will the solution look like? Will human-generated intellectual content still rule as the more valuable thing, and if so, how will it be 'Certified Fresh'?

JK Brown said...

"My law clerks would be wasting 30, 45 minutes, an hour, developing a chronology of events. "

And therein lies the rub. AI can accomplish the task quicker, but the law clerks don't engage intimately and so don't learn from their work.

AI is just accelerating the problem but it is one that has arisen since machine automation started. In the 1980s, the ways of visualizing and thinking from pen and paper mechanical drawing fell to CAD. Yes, the latter is more productive and faster, and is good for work, but not for the student.

As you work with a process by mind and hand, you learn little bits the professors don't even know to teach. Those are being lost as those who "did it the hard way" are dying out.

When the magic boxes break, those trained to technology instead of educated flounder. For 40 years, the colleges and universities have taught how to use the technology, not educated.

But all is not lost. Some tiny few will need to know and thus learn, being abused for inefficiency all along the way, until that day when knowing the underlying process saves the day because the technology has died unexpectedly at a critical moment.

Real education must ultimately be limited to men who insist on knowing, the rest is mere sheep-herding.
Ezra Pound

Enigma said...

Given that LLMs base their output on fuzzy human products and probabilities, the big issue for users is the Unknown Unknowns. If you are an expert already, it's easy to spot and call out AI mistakes. If not, then you'll not probe deeper, swallow everything, and be as vulnerable as a green rube joining a poker game.

Per @RideSpaceMountain, LLMs bring web search back to the quality of the early Google pre-SEO era, before Wikipedia became a propaganda front. AI tools are incredibly efficient at grunt work and getting one started on a new topic; this is new and distinctly superior to anything that came before.

Per my interrogation of AI tools about how they work, the providers are FULLY AWARE that others are trying to corrupt the data and that AI-to-AI content layering is a dead end. They are trying to harvest paper data from the 'golden age' before the web and before gaming the system was possible. So, AI content stands to become half walled-garden/proven library and half cleaned-up new human output.

Lazy and ignorant professionals often get the short stick, but they stand to be publicly humiliated by sole reliance on AI.

Paul said...

AI is only as good as the data and the algorithm. GIGO.

Balfegor said...

Maybe it doesn't count if you only rely on things you — that is, your clerks — have double checked.

The risk there is the things they ought to have checked but didn't because AI didn't highlight it to them. Although to some extent, that's the responsibility of the parties, and if the parties failed to highlight something important -- legal authority, a piece of evidence, etc. -- the average judge and his clerks aren't likely to pick up on it.

But the legal system isn't about establishing "truth" or "justice" after all. It's all just a kludge to reach a reasonable resolution of the dispute as presented. And to that end, AI might not be perfect, but it doesn't have to be. Is it as good as a replacement level law clerk? For most purposes, probably yes. Certainly it's pretty normal for human attorneys to miss stuff, sometimes very important stuff. The hallucination issue is a problem, but given that AI can already manage programming tasks with much more intricate dependencies than the legal code, I think it's solvable even with current technology. The bottleneck is probably that to the people with the brains to solve it, legal pettifoggery is a low priority.

boatbuilder said...

Nothing wrong with this. It has been common for judges and lawyers to cite block quotes from prior decisions which include cites to other cases, etc., for purposes of summarizing the state of the law. Isn't this what the judge is doing here? The key is to make sure that what you are cutting and pasting is a correct summary, or correct statement of the law (or timeline).
(My first foray into AI legal research generated a very confident response including a direct quote from a recent state Supreme Court decision directly on point and aligning with the position favorable to my client. I checked the decision--the quote came from the dissent and the decision had in fact firmly established that the law was the opposite).

Lem Vibe Bandit said...

Woman DA Hardest Hit:
ATLANTA, Ga. (Atlanta News First) — Calling her efforts to oppose a new trial for a convicted murderer “expanded legal research,” a metro Atlanta attorney has admitted using AI to cite nonexistent cases in a recent appearance before the Supreme Court of Georgia.

In a signed affidavit, Deborah Leslie - who is listed as an attorney for the appellate and assets forfeitures unit in the Clayton County District Attorney’s Office - has apologized for citing some cases that don’t exist.

On March 18, Payne appeared before the state Supreme Court as it was hearing arguments in Hannah Payne’s attorneys’ request for a new trial.

Payne was convicted in the May 7, 2019, killing of Kenneth Herring. Prosecutors said Payne witnessed a traffic accident she wasn’t involved in, then followed Herring when he left the scene and blocked in his truck, leading to a confrontation where she shot him.

tim maguire said...

Organizing facts and data is a great use for AI--reduce the mindless busywork. You can check a timeline assembled by AI in far less time than it would take you to build the timeline in the first place.

Unreliability is only an issue if you rely on it.

ColoComment said...


JK Brown said... 4/2/26, 8:49 AM
Your comment reminded me of a book I read some long time ago, about lost and found knowledge: "A Canticle for Leibowitz," by W. M. Miller.

RideSpaceMountain said...

Since everything is about firsts now, between an LLM and KBJ who would make a better SC justice?

boatbuilder said...

Back in the days when dinosaurs roamed the Earth we had to "Shepardize"--check every cite in multiple editions of a massive and constantly updated sourcebook--to make sure that cases had not been reversed or distinguished. We researched law by running down "words and phrases" and reading cases that cited other cases which might pertain to what we were dealing with. It was very inefficient and time-consuming. Lexis was being developed while I was in law school. Nobody owned a laptop or even a PC. There was a single computer terminal in the school library that you could sign up to use. There was generally no waiting list.
The world moves forward.

Tom T. said...

This is risky. At least one district judge has ruled that uploading information to a third-party AI waives attorney-client privilege and work product protection.

bagoh20 said...

When I was in high school I took a slide rule class. We were allowed to use a pocket calculator, very expensive at the time, to check our work, but nobody ever checked the calculator's answers. AI is nowhere near as trustworthy. "Trust but verify," which is diplomatic talk for don't trust.

boatbuilder said...

And yes, it was quite common for judges, lawyers, and clerks to miss key cases and legal arguments, or to confidently cite precedent, unaware that the case relied upon had recently been overruled or distinguished. We made assumptions because of time pressure. AI is far from perfect, but it saves loads of time. Trust but verify.

Shouting Thomas said...

Scott Adams predicted that professional guilds would try to pass laws that would prevent AI from competing against them. NYS is doing just that. There’s currently a bill before the NYS Senate to prevent AI from giving “substantive” legal advice to users.

boatbuilder said...

Heh, Bagoh. I think you need to trust up to a point, or you will get buried. You still need to apply your human intelligence to the problem.

narciso said...

Who compiled the study

CJinPA said...

So obviously, lawyers will be submitting fewer billable hours to clients, right? Right?

Left Bank of the Charles said...

I don’t see a fundamental difference between a judge relying on the work of a law clerk and relying on the work of an AI tool. In both cases the judge has to determine to what extent the work product can be trusted.

The use of law clerks could be seen as a primitive type of AI.

Mary Beth said...

4/2/26, 9:08 AM

It's good she's apologized. At first she tried to blame some unnamed reviser for the addition of the hallucinated citations.

From what I've read, Payne was directed by the officer to follow the man who caused the accident when he tried to flee. She confronted him, they fought. It sounds like it might have been self defense if she hadn't been white and it hadn't been Atlanta.

Leland said...

Asking AI to work out the logical order of events actually sounds good, but then asking a computer to understand discrimination is silly. Try asking AI for a definition of "porn".

boatbuilder said...

My son was recently involved in a dispute with his landlord in NYC, which he ended up taking to housing court (he was more or less forced to do so given his landlord's actions). He wrote a demand letter to the landlord and asked me to review it. (He's a textile designer, not a lawyer). The letter was excellent and I told him so, saying that he should have been a lawyer, and telling him that it was far better than the sort of thing I was getting from associates before my recent retirement. He of course used AI.
Frankly AI is going to put a lot of lawyers out of work.
(He went to court and was able to negotiate a very favorable settlement through the court mediator).

Wilbur said...

Boatbuilder:
Years ago I wrote a article, published in the state Bar Journal, regarding a particular criminal statute. In researching it, I came across a (then) 15 year-old District Court of Appeal case throwing out a conviction under a portion of this statute. The opinion cited a state Supreme Court decision as mandating this outcome.

When I read the Supreme Court opinion, I discovered it held exactly the opposite of what the District Court opinion said it did on this point. There was no wriggle room whatsoever. And the DCA case was then cited in several cases afterwards as controlling law.

I wondered then and now, did no one at the District Court, no justice nor clerk, ever read the state Supreme Court case it cited to overturn the conviction? And what about the defense attorney who obviously put forth this mendacious argument and citation in the defense brief? Worse, did no one from the State AG's office bother to read the case? Malpractice abounded.

I found it astonishing that such seemingly unethical/incompetent lawyering occurred at that level and that no one ever caught such an obvious lapse. I guess I shouldn't have been.

Fred Drinkwater said...

From "Canticle" comes the mystery of the electron, a "negative twist of nothingness". Also the dread "Fallout" which may or may not be inside the "Fallout Survival Shelter".

One monk developed a mathematical technique for recovering text from burnt portions of pages. Tedious and slow. Sounds now, like what an AI would try to do.

Excellent book.

n.n said...

So, legal beagles are a prejudicial basket of correlations.

Justabill said...

I once had a lawyer submit a brief to me relying on a case that was exactly on point. Unfortunately for him, when I cite checked it, it had been reversed on appeal. I pointed this out in my decision, but he continued to cite it in similar cases, leading me to conclude that all he had done was read the conclusion and skip my analysis. Very disappointing.

Anthony said...

So far my actual use of AI has been limited to asking it to write up some of my annual performance review in manager-speak. It actually worked rather well. I gave it a few things to start with, it spit out some mumbo jumbo -- the stuff management likes to hear -- and I ended up editing the first one (it dealt with several different things), used the stuff I edited out to start a new section, etc. I'd had a hard time coming up with, say, three things I'd done to 'support the company's strategic vision' or whatever, and it came up with some items I didn't even consider. But... it worked. I would've sat there for quite a while otherwise.

Of course, what ends up happening is that I give AI a summary, it spits out a whole lot of verbiage, and then management uses AI to summarize all the verbiage....

Jupiter said...

"I would worry more about the hallucinated legal citations than the factual summaries of the record."
Good thing AI doesn't hallucinate when it's summarizing the record.

bagoh20 said...

I don't do legal work, but I use AI all the time to answer questions about industrial equipment and processes. It's usually right, but maybe 10-20% of the time it gets things wrong, and for some reason when it's wrong it's completely wrong, as in the opposite of correct. It's still indispensable to me. I'm just always careful and will ask multiple sources as backup on important stuff.

n.n said...

Rough drafts.

holdfast said...

Any Google search will offer an AI answer.

So if you search on Google, you've likely also used AI.

Leland said...

when it's wrong it's completely wrong, as in the opposite of correct

That's been the experience where I work. Asking AI to summarize a meeting transcript is pretty cool. But ask it to provide a cost estimate, and sometimes we find numerator and denominator flipped while the units (just text to it) are correct. You would think if it could find the right formula and variables, then it would easily plug the variables in correctly, but nope.

Indefinitely Extended Excursion™️ said...

@ Leland
Tesla's stock falls as delivery report suggests the company is 'actively sacrificing' EVs ~ Morningstar

$1.37T market cap. 341x PE. There's a lot of hope priced in. If you want a 10% return for all that risk, Tesla needs to generate $137B a year in free cash flow. At 1.69mm vehicles sold, that's a mere $81,000 per vehicle. More than the average price. The math doesn't math. Ah yes, but I forget, it's no longer a car company despite manufacturing and selling cars being its only meaningful revenue source. And investors are willing to pay Musk $1T for this, more than the entire profit history or future of the company.

mccullough said...

Now the judge doesn’t need law clerks. Congress can zero out that budget item.

Enigma said...

Tesla = Bitcoin = memes = dreams

PM said...

AI = BI / fundamentally, Bro Intelligence.

Hassayamper said...

I'm now old enough to remember an era when search engines didn't suck. Before SEO and advertising made them practically unusable. When logical queries more often than not produced the data, boilerplate, or direct link to the thing typed in the box.

How I miss Google circa 2005, when you didn't need to go to the fourth page of results to get what you were looking for, and you could be certain that no 23 year old poli-sci majors at Google had been working hand in glove with Democrat party operatives and Deep State bureaucrats to conceal information that went against the leftist narrative.

narciso said...

You have to know exactly what you're looking for

Indefinitely Extended Excursion™️ said...

My oldest is a member of the NY, DC and California bars, and 2 weeks ago all California lawyers received an ominous letter from the Cali bar concerning AI “hallucinations”.

It turns out even lawyers using top AI legal tools are submitting documents to the court with erroneous citations and other issues.

The bar requested that every submission using AI have a human being, a lawyer, fine-tooth-combing every line.

What does this mean practically? Probably that judges, who have good human law clerks to check everything, are fed up and are about to start handing down heavy penalties.

What may this mean for lawyers? Malpractice lawsuits from angry clients who will blame them for judges making adverse decisions on motions and more.

I understand malpractice lawsuits are a more lively area in the U.S., but as the old legal saying states: "As goes California, so goes the nation." And from this nation, the world.

Gem Quincyite said...

AI says it is good, as long as you use "good AI." So, thanks for nothing. It is just like the professional golfer asking his caddy, "Is this the right club?" The caddy answers, "Of course, if you hit it right."

hombre said...

As a former law clerk I am uncomfortable with AI replacing them. OTOH, it is apparent that many judges eschew the law in favor of their political preferences anyway (E.g., Boasberg and The Weird Sisters of SCOTUS.). Maybe the upside of AI is that fewer lawyers will be spewed out of lefty law schools.

Indefinitely Extended Excursion™️ said...

The law is not just about information, or facts, but also about correctly interpreting those facts. AI struggles to use the right information because it relies on probability. And it is only capable of correct interpretation by chance, because it lacks a world model. One issue is that AI can produce such a volume of output that checking it all becomes a new industry.

narciso said...


Blame AI

Julie Kelly 🇺🇸 on X: "As I enter hour three of sitting on the tarmac at DCA--with plans and promises changing every 10 minutes--I have chosen to add to my inner rage by reading Judge Amit Mehta's latest sophistry in his opinion denying Pres Trump's claims (again) of immunity from civil liability for https://t.co/XHD2k5TQbd" / X https://share.google/CB2J9SxHETzQRCiA6

Richard Dolan said...

AI is revolutionizing the practice of law, and at a pace that's hard to keep up with. It's not a question of whether it will be used, but only how, with what safeguards (security, privacy and privilege issues are a problem with any AI tool that involves uploading data to a public-facing database) and at what cost. There are many products out there, and lots of them (like those from Westlaw) work to make sure they don't generate hallucinatory responses --- they just keep getting better (and also more expensive). For the courts, it's a matter of what the Administrative Office of the U.S. Courts is willing to pay for -- as in everything, quality does not come cheap. But no question that it's here to stay, for the courts and practitioners alike.

Leland said...

Wow, Mehta is hanging his hat on Hutchinson's testimony.

narciso said...

I knew it was stupid but i didnt realize to what degree

Yancey Ward said...

So, I asked Grok to summarize "Moby Dick" in 100 words. Then I asked ChatGPT to summarize Grok's summary in 90 words. Then I asked Gemini to summarize ChatGPT's summary in 80 words, etc., etc. By the time I got to the 2-word summary, the answer was 42.

narciso said...

A man chases a whale, the whale wins

Gusty Winds said...

I'm getting stuff done in seconds and minutes (with some review and modification of course) that would take me hours or days. If you haven't figured out how to use AI to help you crank out paper/computer work, you are already behind.

boatbuilder said...

Perhaps one practical "hack" to uncover mistakes, misinterpretations and "hallucinations" by AI legal tools would be to assign a separate and competing legal tool to analyze the initial AI result and point out any such mistakes, misinterpretations and hallucinations.

Lawyers and judges have been creatively mis-citing and misinterpreting caselaw for centuries. Sometimes it's accidental. Sometimes it is purposeful.

Enigma said...

@boatbuilder: Software started with LexisNexis, went to AI, and will now go back? Could happen. Could be that busybodies like to keep busy.

boatbuilder said...

I would guess that LexisNexis and Thomson Reuters are desperate for some way to stay afloat at this point. They offer AI research tools, but why buy the cow when the milk is free?

ALP said...

AI has saved me time as an immigration paralegal. Example: I draft Outstanding Researcher and O-1A Extraordinary Ability petitions for scientists. I am NOT a scientist. Neither are the immigration officers reading what I write. Imagine trying to summarize and simplify complex scientific ideas when you are NOT a scientist. It's tough, and it would take me considerable reading, asking the client to clarify/simplify, asking my science geek partner to explain...

Enter Grok. I copy/paste the abstract from the scientist's most cited research paper and ask it to "Explain this to me like I'm a child - dumb this down so a non-scientist can understand this." I also draft the "Expert Letters" we often obtain from peers of the foreign national - now I have to figure out where the common ground is between the two scientists and explain in the letter how working together furthered the research. I'll ask Grok: "Here are the names of two scientists that often work together. Clarify the role each had in the research and how they are complementary or support each other." I could have saved a TON of time if I had such a tool in the past.

Josephbleau said...

I think you have the same thing with paralegals. They could easily do this, a high school student could do it, and an AI could do it. You would need to check all their outputs for error, and you have to check your own output for error even if you are a mighty HLS grad.

That is just the problem with the delegation of intellectual activity: does the super genius solve the differential equations and have the comfort of knowing that he followed all the steps, or does she take the answer from AI or MATLAB and validate it by plotting it on a graph, or let a grad student do it and trust them? It's not a new question at all.

MadisonMan said...

I'm interested that the judge thinks time is wasted by developing a chronology of events. If you do it yourself, you have a much better idea of the chronology, which may be important.

I recall some text I had to transcribe from written letters to pdf. I could have OCR'ed it, but because I did it by hand, I know exactly what is in the pdf.

narciso said...

yes chronology is kind of important,

Jim at said...

Considering some of the rulings they're making, AI isn't looking so bad.

mccullough said...

Let’s be honest. The lawyers are still charging their clients more. So what’s the point of AI in the law.

Craig Mc said...

We use AI in our jobs every day. So far it's been a positive, if not complete, experience. Don't expect it to do your job - it's not there. Yet.

boatbuilder said...

As Grok notes, "courts are drowning in caseloads." Contrary to popular belief, or perhaps TV-generated fantasy, an awful lot of what goes on in the courthouse is not careful, reasoned decisionmaking, but processing cases through the system. It is far from perfect, but if AI can streamline the process and free up real decisionmaking, we are better off for it.

Post a Comment

Please use the comments forum to respond to the post. Don't fight with each other. Be substantive... or interesting... or funny. Comments should go up immediately... unless you're commenting on a post older than 2 days. Then you have to wait for us to moderate you through. It's also possible to get shunted into spam by the machine. We try to keep an eye on that and release the miscaught good stuff. We do delete some comments, but not for viewpoint... for bad faith.