dncornholio 3 days ago

Using an LLM for medical research is just as dangerous as Googling it. Always ask your doctors!

el_benhameen 3 days ago | parent | next [-]

I don’t disagree that you should use your doctor as your primary source for medical decision making, but I also think this is kind of an unrealistic take. I should also say that I’m not an AI hype bro. I think we’re a long ways off from true functional AGI and robot doctors.

I have good insurance and have a primary care doctor with whom I have good rapport. But I can’t talk to her every time I have a medical question—it can take weeks to just get a phone call! If I manage to get an appointment, it’s a 15 minute slot, and I have to try to remember all of the relevant info as we speed through possible diagnoses.

Using an llm not for diagnosis but to shape my knowledge means that my questions are better and more pointed, and I have a baseline understanding of the terminology. They’ll steer you wrong on the fine points, but they’ll also steer you _right_ on the general stuff in a way that Dr. Google doesn’t.

One other anecdote. My daughter went to the ER earlier this year with some concerning symptoms. The first panel of doctors dismissed it as normal childhood stuff and sent her home. It took 24 hours, a second visit, and an ambulance ride to a children’s hospital to get to the real cause. Meanwhile, I gave a comprehensive description of her symptoms and history to an llm to try to get a handle on what I should be asking the doctors, and it gave me some possible diagnoses—including a very rare one that turned out to be the cause. (Kid is doing great now). I’m still gonna take my kids to the doctor when they’re sick, of course, but I’m also going to use whatever tools I can to get a better sense of how to manage our health and how to interact with the medical system.

parpfish 2 days ago | parent | next [-]

I always thought “ask your doctor” was included for liability reasons and not a thing that people actually could do.

I also have good insurance and a PCP. The idea that I could call them up just to ask “should I start doing this new exercise” or “how much aspirin for this sprained ankle?” is completely divorced from reality.

el_benhameen 2 days ago | parent | next [-]

Yes, exactly this. I am an anxious, detail-focused person. I could call or message for every health-related question that comes to mind, but that would not be a good use of anyone’s time. My doctor is great, but she does not care about the minutiae of my health like I do, nor do I expect her to.

rkomorn 2 days ago | parent | prev [-]

I think "ask your doctor" is for prescription meds since only said doctor can write prescriptions.

And "your doctor" is actually "any doctor that is willing to write you a prescription for our medicine".

parpfish 2 days ago | parent [-]

"ask your doctor" is more widespread than tthat. if you look up any diet or exercise advice, there's always an "ask your doctor before starting any new exercise program".

i'm not going to call my doctor to ask "is it okay if I try doing kettlebell squats?"

rkomorn 2 days ago | parent [-]

Yes, I totally took that out of context and said something a bit senseless.

But also, maybe calling your doctor would be wise (eg if you have back problems) before you start doing kettlebell squats.

I'd say that the audience for a lot of health related content skews towards people who should probably be seeing a doctor anyway.

The cynic in me also thinks some of the "ask your doctor" statements are just slapped on to artificially give credence to whatever the article is talking about (eg "this is serious exercise/diet/etc).

Edit: I guess what I meant is: I don't think it's just "liability", but genuine advice/best practice/wisdom for a sizable chunk of audiences.

lurking_swe 2 days ago | parent | prev | next [-]

I live in the U.S. and my doctor is very responsive on MyChart. A few times a year i’ll send a message and I almost always get a reply within a day! From my PCP directly, or from her assistant.

I’d encourage you to find another doctor.

el_benhameen 2 days ago | parent [-]

My doctor is usually pretty good at responding to messages too, but there’s still a difference between a high-certainty/high-latency reply and a medium-certainty/low-latency reply. With the llm I can ask quick follow ups or provide clarification in a way that allows me to narrow in on a solution without feeling like I’m wasting someone else’s time. But yes, if it’s bleeding, hurting, or growing, I’m definitely going to the real person.

shrx 2 days ago | parent | prev [-]

> it can take weeks to just get a phone call

> If I manage to get an appointment, it’s a 15 minute slot

I'm sorry that this is what "good insurance" gets you.

lurking_swe 2 days ago | parent [-]

no, that’s what happens when you pick a busy doctor or a practice that’s overbooked in general. All too common these days! :(

This probably varies by locale. For example my doctor responds within 1 day on MyChart for quick questions. I can set up an in person or video appointment with her within a week, easily booked on MyChart as well.

yojo 3 days ago | parent | prev | next [-]

This is the terrifying part: doctors do this too! I have an MD friend that told me she uses ChatGPT to retrieve dosing info. I asked her to please, please not do that.

ozgrakkurt 3 days ago | parent | next [-]

Find good doctors. A solution doesn't have to be perfect. The odds of a doctor doing better than a regular joe with a computer are much higher, as you can see in research around this topic.

SequoiaHope 2 days ago | parent [-]

I have noticed that my doctor is getting busier and busier lately. I worry that cost cutting will have doctors so frantic that they are forced to rely on things like ChatGPT, and “find good doctors” will be an option only for an elite few.

nsriv 3 days ago | parent | prev [-]

I have a hunch that the whole "chat" interface is a brilliant but somewhat unintentional product design choice that has created this faux trust in LLMs to give back accurate information that one could otherwise get from drugs.com or Medline with a text search. This is a terrifying example, and please get her to test it out by second-guessing the LLM and watching it flip-flop.

wtbdbrrr 2 days ago | parent | prev | next [-]

Your doctor can have a bad day, and/or be an asshole.

In 40 years, only one of my doctors had the decency to correct his mistake after I pointed it out.

He prescribed the wrong antibiotics, which I only knew because I did something dumb and wondered whether the prescribed antibiotics covered a specific strain. They didn't, which I knew because I asked an LLM and then superficially double-checked via trustworthy official government sources.

He then prescribed the correct antibiotics. In all other cases where I pointed out a mistake (researched without LLMs, back in the day), doctors justified their logic, sometimes siding with a colleague or "the team" before evaluating the facts themselves, instead of forming an independent opinion, which, AFAIK, especially in a field like medicine, is _absolutely_ imperative.

djrj477dhsnv 3 days ago | parent | prev | next [-]

I disagree. I'd wager that state-of-the-art LLMs can beat the average doctor at diagnosis, given a detailed list of symptoms, especially for conditions the doctor doesn't see on a regular basis.

rafterydj 3 days ago | parent | next [-]

"Given a detailed list of symptoms" is sure holding a lot of weight in that statement. There's way too much information that doctors tacitly understand from interactions with patients that you really cannot rely on those patients supplying in a "detailed list". Could it diagnose correctly, some of the time? Sure. But the false positive rate would be huge given LLMs suggestible nature. See the half dozen news stories covering AI induced psychosis for reference.

Regardless, its diagnostic capability is distinct from the dangers it presents, which is what the parent comment was mentioning.

nsriv 3 days ago | parent | prev [-]

What you're describing, especially with the amount of water "given a detailed list of symptoms" is carrying, is essentially a compute-intensive flowchart with no concept of diagnostic parsimony.

yujzgzc 3 days ago | parent | prev | next [-]

Plot twist, your doctor is looking it up on WebMD themselves

jrm4 2 days ago | parent | prev | next [-]

Almost certainly more dangerous, I would think, precisely because of magnitude errors.

The ol' "What weighs more, a pound of feathers or two pounds of bricks" trick explains this perfectly to me.

gmac 3 days ago | parent | prev [-]

Not really: it's arguably quite a lot worse. Because you can judge the trustworthiness of the source when you follow a link from Google (e.g. I will place quite a lot of faith in pages at an .nhs.uk URL), but nobody knows exactly how that specific LLM response got generated.

naasking 2 days ago | parent [-]

Many of the big LLMs do RAG and will provide links to sources, e.g. Bing/ChatGPT, Gemini Pro 2.5, etc.
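
Roughly, the RAG pattern being referred to looks like the sketch below: retrieve sources, then have the model answer while citing them, which is where the "links to sources" come from. This is a minimal illustration with stand-in search() and generate() stubs, not any specific vendor's API.

    # Conceptual RAG-with-citations sketch. search() and generate() are
    # placeholder stubs standing in for a real search index and LLM call.
    from dataclasses import dataclass

    @dataclass
    class Doc:
        url: str
        text: str

    def search(question: str, top_k: int = 5) -> list[Doc]:
        # Stub: a real system would query a web or literature index here.
        return [Doc("https://example.org/page", "Relevant passage...")][:top_k]

    def generate(prompt: str) -> str:
        # Stub: a real system would call an LLM here.
        return "Answer text with citations like [1]."

    def answer_with_sources(question: str) -> str:
        docs = search(question)
        # Number the retrieved passages so the model can cite them as [n].
        context = "\n\n".join(
            f"[{i+1}] {d.url}\n{d.text}" for i, d in enumerate(docs)
        )
        prompt = (
            "Answer using only the sources below and cite them as [n].\n\n"
            f"{context}\n\nQuestion: {question}"
        )
        # The [n] markers in the output map back to the URLs above.
        return generate(prompt)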