Be careful: chatting with AI about your case is discoverable (harvardlawreview.org)
32 points by rogerallen 2 days ago | 12 comments
Terr_ 2 days ago | parent | next [-]

It seems the key here isn't—or shouldn't be—what kind of service the defendant used, but whether something special happens when a service is involved in preparing a message to his lawyer.

IMO if the "for my lawyer" purpose/intent is not in dispute, then it shouldn't matter whether the service is a search-engine, an LLM, a browser-based word processor, or the drafts/sent folders of a webmail client.

The reverse direction is much clearer: Imagine a client receives an obviously-privileged email from their lawyer, and uses a cloud text-to-speech service to listen to it. Should that audio/text be admissible as evidence? Hell no.

pseingatl 2 days ago | parent | prev | next [-]

That's one judge. An audio tape made by a criminal defendant and intended for review by his counsel is a non-discoverable privileged communication. The tape retains this character if reviewed by an attorney-authorized paralegal. What difference exists where the attorney has the tape summarized by AI? I respectfully submit that Hizzoner is incorrect.

We might also ask whether the best venue to decide national AI regulation is a single judge sitting in a criminal case involving a fraudster. If Judge Rakoff is correct, then a trade secret shared with AI is no longer a trade secret. This affects not just a single NY criminal defendant, but anyone who runs a company and wants to keep business practices secret. I would submit that this is no way to regulate a field such as AI.

pavel_lishin 2 days ago | parent | next [-]

> What difference exists where the attorney has the tape summarized by AI.

But that's not what happened here.

markisus 2 days ago | parent [-]

But this ruling will surely set precedent for other cases where AI is used. It may cover the case of AI summaries as well.

grepfru_it a day ago | parent [-]

> But this ruling will surely set precedent for other cases where AI is used.

I don't remember which court, but a ruling like this typically only binds that jurisdiction. It can be appealed higher. SCOTUS has not ruled, so it's still up for further argument.

cowboylowrez a day ago | parent | prev [-]

I dunno, the ruling seems to have a point to me, a non-lawyer. Claude is not an attorney, and his attorney was not involved until after the Claude "conversation". Look at, for instance, the exchange of emails with your lawyer: are they privileged? Yes, with caveats, according to Gemini, but it could be lying of course. How about if you emailed your mobster uncle asking for advice on how to use a lawyer to keep your guilty ass outta jail? Is that privileged? All of a sudden I'm not so sure.

This seems to be a pretty narrow ruling but maybe I'm missing something not being a lawyer and all.

mbrumlow a day ago | parent [-]

I look at Claude / ChatGPT as an extension of my thought, and it should be held as such in all court proceedings. It's used to allow me to think and reason about things.

Idk how that works with journals today, but anything like this which helps me organize my thoughts should be off limits to discovery.

rogerallen 2 days ago | parent | prev | next [-]

In United States v. Heppner, Judge Rakoff of the Southern District of New York ruled that written exchanges between a criminal defendant and the generative AI platform Claude were not protected by attorney-client privilege or the work product doctrine.

anon373839 2 days ago | parent | prev | next [-]

This is a really interesting and well-written case update/critique. I agree with the author that the judge's reliance on Anthropic's fine-print privacy policy does not satisfy the actual legal standard governing privilege. Or if it did, it would raise extremely thorny issues around all of the cloud-based technology products that lawyers and clients use every day.

That said, I note that the court's opinion specifically calls out Anthropic's practice of *training models on user data* as a reason why the defendant could not have expected confidentiality. I do not use these cloud models for anything important precisely because they are operated by companies, like Anthropic, that are completely untrustworthy.

quietsegfault 2 days ago | parent [-]

That was my first thought. If the test is "talking to a lawyer", and all tools not directly controlled by the lawyer fall outside of the safe harbor, then no cloud legal tools are safe. What a stupid ruling.

grepfru_it a day ago | parent [-]

Lucky for us it can be overturned, which I strongly suspect it will be. Or at least this case will define the loopholes to use.

spl757 2 days ago | parent | prev [-]

[flagged]