alwa 2 days ago

> Mostafavi told CalMatters he wrote the appeal and then used ChatGPT to try and improve it. He said that he didn’t know it would add case citations or make things up. He thinks it is unrealistic to expect lawyers to stop using AI. [...] “In the meantime we’re going to have some victims, we’re going to have some damages, we’re going to have some wreckages,” he said. “I hope this example will help others not fall into the hole. I’m paying the price.”

Wow. Seems like he really took the lesson to heart. We're so helpless in the face of LLM technology that "having some victims, having some damages" (rather than reading what you submit to the court) is the inevitable price of progress in the legal profession?

21 of 23 citations are fake, and so is whatever reasoning they purport to support, and that's casually "adding some citations"? I sometimes use tools that do things I don't expect, but usually I'd like to think I notice when I check their work... if there were 2 citations when I started, and 23 when I finished, I'd like to think I'd notice.

OptionOfT 2 days ago | parent | next [-]

> He thinks it is unrealistic to expect lawyers to stop using AI.

I disagree. The profession worked without it until now, and using AI is clearly doing more harm than good, especially in situations where you hire an expert to help you.

Remember, a lawyer is someone who has actually passed a bar exam, and with that comes an understanding that whatever they sign, they validate as correct. The fact that they used AI here isn't actually the worst part. The fact that they blindly signed it afterwards is a sign that they are unfit to be a lawyer.

We can make the argument that this might be pushed from upper management, but remember, the license is personal. They can't hide behind such a mandate.

It's the same discussion I'm having with colleagues about using AI to generate or review code. At a certain point there is pressure to go faster, and stuff gets committed without a human touching it.

Until that software ends up on your glucose pump, or the system used to radiate your brain tumor.

sonofhans 2 days ago | parent | next [-]

> The fact that they blindly signed it afterwards is a sign that they are unfit to be a lawyer.

Yes, this is the crux of it. More than anything else, you pay a lawyer to get the details right.

beambot 2 days ago | parent | prev | next [-]

I disagree with your disagreement. The legal profession is not "working until now" unless you're quite wealthy and can afford good representation. AI legal assistants will be incredibly valuable for a large swath of the population -- even if the outputs shouldn't be used to directly write briefs. The "right" answer is to build systems to properly validate citations and arguments.

hattmall 2 days ago | parent | next [-]

Incorrect legal information is generally less beneficial than no information at all. A dice roll of correct or incorrect information is potentially even worse.

kulahan 2 days ago | parent [-]

Lawyers are famously never wrong

dcrazy 2 days ago | parent | next [-]

Lawyers are famously subject to sanctions and malpractice lawsuits if they, say, don’t read the motions they file on your behalf.

kulahan a day ago | parent [-]

Yes yes, there are many annoying rules in the legal system meant to keep regular people from having any legal power directly.

DannyBee 2 days ago | parent | prev [-]

Lawyers are required to cite properly and check that their citations are correct, as well as verify that they are citing precedent that is still good (i.e., has not been overturned).

This is known as shepardizing.

This is done automatically without AI and has been for decades.

DannyBee 2 days ago | parent | prev | next [-]

Lawyer here. I'm not sure why you think AI will fix the first part. What AI does is not a significant part of the cost or labor in the vast majority of kinds of cases. If you have a specific area in mind, happy to go into it with you. The area where this kind of AI seems most likely to reduce cost is probably personal injury.

As for the last sentence, those systems already exist, and roughly all sane lawyers use them. They are required to. You aren't allowed to cite overturned cases or bad law to courts, and haven't been allowed for eons. This was true even before the process was automated completely.

But fully automated systems have now been around for decades, and one is so popular that it led to the word "shepardize" being used for the task. So this is a double fault on the lawyer's part. These systems are well integrated, too. Even back in 2006 when I was in law school, the system I used published an extension for Microsoft Word that would automatically verify every quote and cite, make sure they were good law, and reformat them into the proper style (there were two major citation styles back then). It has only improved since then.

The last sentence describes a solved problem. The lawyer just didn't do it because they were lazy, and committed malpractice.
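The verification workflow described here is, at its core, a lookup of every citation against a curated database. A toy Python sketch of the idea (the case sets, regex, and statuses are purely illustrative; real shepardizing runs against commercial databases like Shepard's or KeyCite, not a hardcoded dict):

```python
import re

# Toy stand-in for a citator database. In practice this lookup is a
# commercial service, not a local set.
GOOD_LAW = {"384 U.S. 436"}      # Miranda v. Arizona, still good law
OVERTURNED = {"163 U.S. 537"}    # Plessy v. Ferguson, overruled

# Matches simple U.S. Reports citations like "384 U.S. 436".
CITE_RE = re.compile(r"\b\d{1,3} U\.S\. \d{1,4}\b")

def check_citations(brief_text: str) -> dict[str, str]:
    """Map each citation found in the brief to its status."""
    report = {}
    for cite in CITE_RE.findall(brief_text):
        if cite in OVERTURNED:
            report[cite] = "overturned"
        elif cite in GOOD_LAW:
            report[cite] = "good law"
        else:
            report[cite] = "not found"  # possibly hallucinated
    return report
```

The point is that "not found" and "overturned" are both mechanical, cheap checks; a citation that fails either one should never reach a court filing.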

yibg 2 days ago | parent | prev | next [-]

I don't really see how this is any different from checking work from another human. If a lawyer tasks a staff member with researching citations, and that person makes up a bunch of them and the lawyer doesn't check, the lawyer would be responsible as well. Just because it's AI and not a person doesn't make it less of an issue.

JumpCrisscross 2 days ago | parent | prev | next [-]

> AI legal assistants will be incredibly valuable for a large swath of the population

In my experience they're a boon to the other side.

Using AI to help prepare your case for presentation to a lawyer is smart. Using it to actually interact with an adversary's lawyer is very, very dumb. I've seen folks take what should have been a slam-dunk case and turn it into something I recommended a company fight: they were clearly using an AI to write letters, the letters contained categorically false representations, and those lies essentially tanked their credibility in the eyes of an arbitrator in one case and the court in another. (In the first case, they'd have been better off representing themselves.)

freejazz 2 days ago | parent | prev | next [-]

You could give an example to support your argument instead of just telling everyone that you are right.

pempem 2 days ago | parent | prev [-]

There are, we have to accept, alternate solutions as well.

alach11 2 days ago | parent | prev [-]

> using AI is clearly doing more harm than good

How do you know this? Wouldn't we expect the benefits of AI in the legal industry to be way less likely to make the front page of HN?

LiquidSky 2 days ago | parent | prev | next [-]

His response is absurd. This is no different than having a human associate draft a document for a partner and then the partner shrugging their shoulders when it's riddled with errors because they didn't bother to check it themselves. You're responsible for what goes out in your name as an attorney representing a client. That's literally your job. What AI can help with is precisely this first level of drafting, but that's why it's even more important to have a human supervising and checking the process.

bawolff 2 days ago | parent | prev | next [-]

> He thinks it is unrealistic to expect lawyers to stop using AI

Sure. It's also unrealistic to expect nobody to murder anyone. That's why we invented jail.

neilv 2 days ago | parent | prev | next [-]

I came here to quote that exact part of the article.

My guess is that he probably doesn't believe that, but that he's smart enough to try to spin it that way.

His career should be taking at least a small hit right now, not only for getting caught using ChatGPT, but also for submitting blatant fabrications to the court.

The court and professional groups will be understanding, and want to help him and others improve, but some clients/employers will be less understanding.

tdeck 2 days ago | parent [-]

The thing is, this statement is doing as much harm to his reputation as the original act, if not more. Who would hire this lawyer after he said something like that?

FireBeyond a day ago | parent | prev | next [-]

> We're so helpless in the face of LLM technology that "having some victims, having some damages" (rather than reading what you submit to the court) is the inevitable price of progress in the legal profession?

Same with FSD at Tesla: there are many people who think accidents and fatalities are "worth it" to get to the goal. And who cares if you, personally, disagree? They're comfortable that the risk of your being hit by a Tesla that failed to react to you is an acceptable price of "the mission"/goal.

yieldcrv 2 days ago | parent | prev [-]

> 21 of 23 citations are fake

This was from the model available in June 2023

I've taken this hallucination issue to heart since the first time this headline occurred, but if you started fresh with leading LLMs today, you wouldn't have this issue to the same degree. I'd say it would be down to something like 1 out of 23 at this point.

Definitely keep verifying, especially because the models available to you keep changing if you use cloud services. But September 2025 is not June 2023 anymore, and the conversation needs to be much more nuanced.

tdeck 2 days ago | parent [-]

Frankly, I'd argue that something producing 1 in 23 fake citations may be worse than something producing 21 in 23. It's more likely to make people complacent and more likely to go undetected.

People have more car crashes in areas they know well because they stop paying attention. The same principle applies here.

DannyBee 2 days ago | parent | next [-]

All citations should have been shepardized. This has been standard practice for lawyers for decades. Court rules require that you cite only good law, so you will be excoriated for valid but overturned citations too.

This is actually one of the more infuriating things about all of this. Non-lawyers read this stuff and think, "oh look, it hallucinated some cases and citations." It should still have been caught 100% of the time, and anyone submitting briefs without verifying their cites is not fit to be a lawyer. It's malpractice, AI or not.

yieldcrv 2 days ago | parent | prev [-]

yep, possibly. I’m glad we have a way to see how the situation has improved