All Those A.I. Note Takers? They're Making Lawyers Nervous(nytimes.com)
30 points by JumpCrisscross 2 hours ago | 18 comments
burningion an hour ago | parent | next [-]

The main point raised in the article is that these bots may void attorney client privileges.

But the real danger with these IMO is that they're turning casual conversations into a permanent record, and one that will be completely discoverable in court, should the company get into trouble later.

coffeebeqn an hour ago | parent | next [-]

Plus they are super inaccurate. Gemini gets one of its three bullets subtly or majorly wrong almost every time. Just a few weeks ago Gemini said we’re rolling out our payment setup in Russia. You know, the place we have 20+ sanctions packages on? We were talking about France in the meeting.

operation_moose 39 minutes ago | parent | next [-]

We've found they're surprisingly good if everyone on the call is using a decent headset.

The problems start when using conference room audio or when someone is on their laptop mic. If they miss a word they never write "unintelligible", they just start playing madlibs based on the rest of the sentence.

We just went through a round of 100+ (non-sensitive) VoC interviews and they really cut down the workload of compiling all of the feedback. If the audio was a little shaky though, we pretty much had to throw away the transcripts and do them from scratch like we used to.
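One mitigation (assuming the transcription service exposes per-segment confidence scores, which many ASR APIs do in some form) is to route low-confidence segments to a human for review instead of trusting the fluent-looking guess. A minimal sketch with invented data — the segment structure and threshold are illustrative assumptions, not any particular vendor's API:

```python
# Sketch: split transcript segments into trusted vs. needs-human-review
# based on a confidence threshold. Data and threshold are made up.

REVIEW_THRESHOLD = 0.85  # illustrative; tune against your own error rate

def triage(segments, threshold=REVIEW_THRESHOLD):
    """Return (trusted, needs_review) lists of segments by confidence."""
    trusted, needs_review = [], []
    for seg in segments:
        (trusted if seg["confidence"] >= threshold else needs_review).append(seg)
    return trusted, needs_review

segments = [
    {"text": "We are rolling out payments in France.", "confidence": 0.97},
    {"text": "We are rolling out payments in Russia.", "confidence": 0.62},
]

trusted, needs_review = triage(segments)
```

The point isn't the threshold value — it's that a segment the model was unsure about gets flagged rather than silently presented as fact.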

user_7832 35 minutes ago | parent [-]

> If they miss a word they never write "unintelligible", they just start playing madlibs based on the rest of the sentence.

Imo this is the single biggest flaw of LLMs. They're great at a lot of things, but they don't know when they're wrong (or when they don't have enough information to actually work with), and that's a critical weakness.

IMO there's no structural reason they shouldn't be able to spot this and correct themselves - I suspect it's a training issue. But presumably bots that infer context and fill in the gaps rank better on what people like... at the cost of accuracy.
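One crude signal that a model is "filling in the gaps" rather than knowing the answer is a flat next-token distribution: when no candidate clearly dominates, the model is guessing. A toy sketch — the distributions are invented for illustration, not real model outputs:

```python
import math

def entropy(probs):
    """Shannon entropy (in bits) of a next-token probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Invented distributions: a confident prediction vs. a near-uniform guess.
confident = [0.90, 0.05, 0.05]  # one token clearly dominates
guessing = [0.34, 0.33, 0.33]   # madlibs territory

# High entropy = the model is effectively picking at random.
assert entropy(confident) < entropy(guessing)
```

In principle a transcription or note-taking pipeline could surface this kind of uncertainty instead of emitting a confident-sounding sentence, which is exactly the "knowing when they're wrong" behavior the comment above asks for.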

pjc50 44 minutes ago | parent | prev [-]

Given how financial services can impose silent inexplicable lifetime bans for using the wrong words in the "what is this transaction for" field, I'm wondering at what point the AI automatically reports people for sanctions violation based on its mishearing.

watwut 18 minutes ago | parent | prev [-]

Basically, it will be harder to hide illegal and unethical stuff companies routinely engage in.

Tistron an hour ago | parent | prev | next [-]

https://archive.is/wPKhf

skinfaxi an hour ago | parent | prev | next [-]

https://www.nytimes.com/2026/05/09/business/dealbook/ai-note...

pfortuny 43 minutes ago | parent | prev | next [-]

Honest question:

Do these systems not share data with the AI servers? Or are they all local (on-site, not on-computer)?

I am totally baffled by the trust people put in these systems, sharing with them the most obviously private data.

dsr_ 3 minutes ago | parent [-]

Most services have privacy policies that boil down to:

- we promise not to share PII (defined as narrowly as possible)

- we promise not to share payment information except with our payment system

- if you pay us, we promise not to train LLMs on your data

- you agree that everything else can be used for any business purpose, including marketing, intelligence gathering, and "sharing with our 1735 trusted partners".

rpaddock 22 minutes ago | parent | prev | next [-]

Some companies want no records at all, see:

"2028 – A Dystopian Story By Jack Ganssle":

http://www.ganssle.com/articles/2028adystopianstory.htm

Known as ’The Rule of 26’, this is sometimes given as a reason NOT to keep engineering notebooks etc. Under Federal Rule 26 you are at fault if you did not volunteer the records before they are requested - including any backups.

From Cornell Law:

LII Federal Rules of Civil Procedure Rule 26. Duty to Disclose; General Provisions Governing Discovery

(a) Required Disclosures.

(1) Initial Disclosure.

(A) In General. Except as exempted by Rule 26(a)(1)(B) or as otherwise stipulated or ordered by the court, a party must, without awaiting a discovery request, provide to the other parties:

(i) the name and, if known, the address and telephone number of each individual likely to have discoverable information—along with the subjects of that information—that the disclosing party may use to support its claims or defenses, unless the use would be solely for impeachment;

(ii) a copy—or a description by category and location—of all documents, electronically stored information, and tangible things that the disclosing party has in its possession, custody, or control and may use to support its claims or defenses, unless the use would be solely for impeachment; …

https://www.law.cornell.edu/rules/frcp/rule_26

next_xibalba 19 minutes ago | parent [-]

See also the OpenAI vs. Musk trial, where Greg Brockman's diary and Sam Altman's texts have taken center stage.

vintagedave an hour ago | parent | prev | next [-]

Paywall: can anyone share what the issue is?

Inaccuracy in meeting minutes?

Leaking private info, re security of notes?

I have never used them (don't trust them to accurately capture what is important in a meeting vs just noting what's mentioned), but the concept seems very useful to me.

WillAdams 40 minutes ago | parent | next [-]

Reminds me of when I worked for a small shop that had the copier maintenance contract at a local college. When something went wrong and wasn't properly addressed, my bosses found themselves held to account, with their own words from prior phone calls quoted back to them verbatim --- which mystified them until I explained that the administrators had all come up from the clerical pool and knew shorthand.

close04 an hour ago | parent | prev [-]

It's in the viewable text on the page.

> A trendy productivity hack, A.I. note takers are capturing every joke and offhand comment in many meetings. They could also potentially waive attorney-client privilege.

By now everyone knows that AI notes that aren't curated by a human will catch every silly thing said in the meeting while omitting the context of tone or body language. Something as simple as "yeah, right" has vastly different meanings depending on how it was said. In a different context it's already been established that using AI breaks attorney-client privilege [0], and this concern has been raised before by law firms [1][2] and the American Bar Association [3] (you can just hit escape before the paywall loads to see the full content). A judge will have to weigh in on this one too.

I don't know what's with the wave of paywalled articles that keep making it to the front page without any workaround included in the submission. Even when you coax the text out of the page source, they're not very insightful to begin with.

[0] https://perkinscoie.com/insights/update/federal-court-rules-...

[1] https://www.smithlaw.com/newsroom/publications/the-silent-gu...

[2] https://natlawreview.com/article/when-ai-takes-notes-protect...

[3] https://www.americanbar.org/groups/gpsolo/resources/ereport/...

pjc50 an hour ago | parent [-]

People opt in to the panopticon and then discover they have no more secrets. I'm surprised lawyers fall for that as well.

lukewarm707 32 minutes ago | parent [-]

the doofus lawyer probably didn't realise, i wouldn't call it opt in

sandworm101 37 minutes ago | parent | prev [-]

>> Executives and corporate boards generally expect conversations with their legal team about legal matters to have attorney-client privilege. They lose that protection if they share the same information with outside parties — and it’s possible that an A.I. note taker could have the same effect.

Total oversimplification. The fact is the privilege is a rule entirely in the hands of the court. Every time a new communications technology comes up, someone shouts about privilege, but the courts still accept it. (Telephones, cell phones, emails, IMs, zoom court - each has had its day in the A-C privilege debate and been accepted.) What matters is that the parties intended and expected the communications to be privileged.

As an example: I had a crim law prof who had been a NYC public defender in the 70s/80s. She had regularly interviewed clients at Rikers Island. All interviews were listened to by guards, and she said you could even pay to get a copy of the recording. But these interviews were still covered by attorney-client privilege. No court would allow such evidence, though that doesn't mean the prison could not use it for jail safety. Why does this matter? Because the mere presence of a third party doesn't mean anything. This isn't magic; an eavesdropper does not nullify the spell. Whether something is or is not privileged depends on the rules followed in the local jurisdiction, and no jurisdiction has ever followed a simplistic "presence of a third party" rule.

Until someone demonstrates an example of an AI actually leaking privileged information, courts are going to chalk it up as just another electronic tool for recording communications.