sonofhans 8 hours ago

I’ve been in tech and medicine too. Consider that any “HUGE” effect in this context is likely exaggerated, especially for something as prosaic as a note-taking assistant.

As a patient sitting with a doctor, I don’t care how standardized the notes are. I don’t care about anyone’s NPS score. I do want the doctor to connect with me, but I also remember not too long ago when doctors did this anyway, without any assistance from robots.

nitwit005 6 hours ago | parent | next [-]

If there is a large effect, I'd expect it's excitement about a new thing.

Positive survey feedback certainly isn't a bad sign, but people can get very excited about cool new technologies, even ones that ultimately fail.

reaperducer 8 hours ago | parent | prev | next [-]

I also remember not too long ago when doctors did this anyway, without any assistance from robots.

Or with assistance from other humans.

The last time I had surgery, every time I met with the surgeon (about six times), he had an intern following him around with a Thinkpad, typing in everything said.

The intern had the ability to understand context, idiomatic expressions, emotion, and a dozen other important and useful things that an AI transcription will never capture.

dpark 7 hours ago | parent [-]

That’s probably not an intern. Doctors with enough pull can get dedicated scribes like this, but they aren’t cheap, which is why most doctors don’t get them.

burnte 4 hours ago | parent | prev [-]

> I’ve been in tech and medicine too. Consider that any “HUGE” effect in this context is likely exaggerated, especially for something as prosaic as a note-taking assistant.

Imagine your doctor head down writing down everything you say. Now imagine your doctor looking you in the eye and listening intently. Which do you think feels better to the patient? That is "huge". Anything that helps improve patient care with little effort and cost IS HUGE to us. That feeling of the doctor being present and invested helps patient outcomes. THAT is also huge, even if it's a few percent.

We're healing people, we're not looking for a unicorn startup, a few percent improvement IS HUGE to us.

> As a patient sitting with a doctor, I don’t care how standardized the notes are.

Yes you do, better notes mean better care because the next time you're seen your records are clean, understandable, and compliant with regulations and best practices. Better notes mean doctors are following protocols. Better notes mean fewer claim rejections, and fewer claim rejections mean less money wasted arguing with insurance companies. Better notes mean the data is more easily used for research, as well, which leads to new treatments and better outcomes.

> I don’t care about anyone’s NPS score.

Ever had a doctor with a bad bedside manner? Missed a diagnosis? Skips appointments on Fridays? Tracking NPS scores can help with that. Every data point is useful, and patient satisfaction is massive.

> I do want the doctor to connect with me,

Ok, well, most people DO want this, most people DO want to have a good relationship with their doctor where they feel heard and cared about rather than just another widget on a conveyor belt.

> but I also remember not too long ago when doctors did this anyway, without any assistance from robots.

I also remember when doctors weren't constantly overruled by insurance companies. Ever heard of a Prior Auth? That's when your doctor writes a prescription or an order and then the insurance company makes the doctor call them back and say "yes, I did this on purpose, yes the patient really needs this." Then a bureaucrat at the insurance company will decide if the doctor is right or not. Usually those bureaucrats aren't even doctors. That's illegal, but it happens every day.

Anything I can do to help my doctors provide better care for our patients, I'll do. I've dealt with scribes for 12 years and I genuinely think these AI scribes are an amazing use of the technology. We don't have to hire human scribes, and our doctors are freed up to deal with the patient thanks to a documentation helper.

I evaluated quite a number of these tools before we rolled any out; I've been researching them for two years. Dragon with Copilot is not a good tool, for example. There was another we evaluated, and when I searched for them today their story is wildly different from what it was 18 months ago, when I discovered they were lying through their teeth about the tech. I see they claim to have secured a $70m round in 2024 (which I know is a lie) and more since, so maybe they can actually do what they say now, but I couldn't trust them, so I kept evaluating.

I'm not an AI truster, and AI isn't a panacea, but it DOES have uses, and this is one I've seen make a positive difference. I'm not an insurer, I work for providers, and my goal is helping my docs provide the best care, so I promise I'm not going to roll out bullshit tech or things that would endanger our patients. My reputation is on the line, and I take that incredibly seriously too.