cpfohl 14 hours ago

Tricky…my son had a really rare congenital issue that no one could solve for a long time. After it was diagnosed I walked an older version of ChatGPT through our experience, and it suggested my son’s issue as a possibility, along with the correct diagnostic tool, in just one back and forth.

I’m not saying we should be getting AI advice without a professional, but in my case it could have saved my kid a LOT of physical pain.

rafaelmn 14 hours ago | parent | next [-]

> After it was diagnosed I walked an older version of ChatGPT through our experience, and it suggested my son’s issue as a possibility, along with the correct diagnostic tool, in just one back and forth.

Something I've noticed is that it's much easier to lead the LLM to the answer when you already know where you want to go (even when that answer is factually wrong!). It doesn't have to be obvious leading; just framing the question so it mentions all the symptoms you now know to be relevant, in the order that points to the diagnosis, is enough.

Not saying that's the case here; you might have gotten the correct answer on the first try. But when I checked my now-diagnosed gastritis, I got everything from GERD to CRC depending on which symptoms I decided to stress and which events I emphasized in the history.

el_benhameen 11 hours ago | parent | next [-]

In January my daughter had a pretty scary stomach issue that had us in the ER twice in 24 hours and that ended in surgery (just fine now).

The first ER doc thought it was just a stomach ache, the second thought a stomach ache or maybe appendicitis. Did some ultrasounds, meds, etc. Got sent home with a pat on the head, came back a few hours later, still no answers.

I gave her medical history and all of the data from the ER visits to whatever the current version of ChatGPT was at the time to make sure I wasn’t failing to ask any important questions. I’m not an AI True Believer (tm), but it was clear that the doctors were missing something and I had hit the limit of my Googling abilities.

ChatGPT suggested, among a few other diagnoses, a rare intestinal birth defect that affects about 2% of the population, of whom only about 2% ever become symptomatic. I kind of filed it away and looked more into the other stuff.

They decided it might be appendicitis and went to operate. When the surgeon called to tell me that it was in fact this very rare condition, she was pretty surprised when I said I’d heard of it.

So, not a one-shot, and not a novel discovery or anything, but an anecdote where I couldn’t have subconsciously guided it to the answer as I didn’t know the answer myself.

jackvalentine 9 hours ago | parent [-]

Malrotation?

We had in our family a “doctors are confused!” experience that ended up being that.

el_benhameen 7 hours ago | parent [-]

Meckel’s diverticulum

schiffern 14 hours ago | parent | prev | next [-]

> checking my now-diagnosed gastritis, I got everything from GERD to CRC depending on which symptoms I decided to stress and which events I emphasized in the history

So... exactly the same behavior as human doctors?

kjellsbells 12 hours ago | parent | next [-]

Exactly the same behavior as a conversation with an idealized human doctor, perhaps. One who isn't tired, bored, stressed, biased, or just overworked.

The value that folks get from ChatGPT for medical advice is due in large part to the unhurried pace of the interaction. Didn't get it quite right? No doctor huffing and tapping their keyboard impatiently. Just refine the prompt and try as many times as you like.

For the 80s HNers out there, when I hear people talk about talking with ChatGPT, Kate Bush's song "Deeper Understanding" comes immediately to mind.

https://en.wikipedia.org/wiki/Deeper_Understanding

inglor_cz 11 hours ago | parent | prev [-]

ChatGPT and similar tools hallucinate and can mislead you.

Human doctors, on the other hand ... can be tired, hungover, thinking about a complicated case ahead of them, nauseous from a bad lunch, undergoing a divorce, alcoholics, depressed...

We humans have a lot of failure modes.

cwnyth 8 hours ago | parent [-]

Human doctors also know how to ask the right follow-up questions.

Aurornis 6 hours ago | parent | prev | next [-]

> Something I've noticed is that it's much easier to lead the LLM to the answer when you already know where you want to go

This goes both ways, too. It's becoming common to see cases where people become convinced they have a condition even though doctors and/or tests disagree. They can get progressively better at getting ChatGPT to return the diagnosis by refining their prompts and learning what to tell it as well as what to leave out.

Previously we joked about WebMD convincing people they had conditions they did not, but ChatGPT is far more powerful for these people.

mythrwy 9 hours ago | parent | prev [-]

Indeed, it is very easy to lead the LLM to the answer, often without realizing you are doing so.

I had a long ongoing discussion about possible alternate career paths with ChatGPT in several threads. At that point it was well aware of my education and skills, had helped clean up resumes, knew my goals, experience and all that.

So I said maybe I'll look at doing X. "Now you are thinking clearly! This is a really good fit for your skill set! If you want I can provide a checklist." I'm just tossing around ideas, but look, GPT says I can do this and it's a good fit!

After 3 idea pivots I started getting a little suspicious. So I tried to think of the thing I am least qualified to do in the world and came up with "Design Women's Dresses". I wrote up all the reasons that might be a good pivot (e.g. past experience with landscape design, and it's the same idea: you reveal certain elements seductively but not all at once, matching color palettes, textures, etc.). Of course GPT says "Now you are really thinking clearly! You could 100% do this! If you want I can start making a list of what you will need to produce your first custom dresses". It was funny but also a bit alarming.

These tools are great. Don't take them too seriously; you can make them say a lot of things with great conviction. It's mostly just you talking to yourself, in my opinion.

terminalshort 12 hours ago | parent | prev | next [-]

Why are you saying we shouldn't get AI advice without a "professional", then? Why is everybody here saying "in my experience it's just as good or better, but we need rules to make people use the worse option"? I have narcolepsy and it took a dozen doctors before they got it right. AI nails the diagnosis. Everybody should be using it.

RobertDeNiro 10 hours ago | parent | next [-]

I wonder if the reason AI is better at these diagnoses is that the amount of time it spends with the patient is unbounded, whereas a doctor is always restricted by the amount of time they have with the patient.

pinnochio 4 hours ago | parent [-]

I don't think we can say it's "better" based on a bunch of anecdotes, especially when they're coming exclusively from people who are more intelligent, educated, and AI-literate than most of the population. But it is true that doctors are far more rushed than they used to be, disallowed from providing the attentiveness they'd like or ought to give to each patient. And knowledge and skill vary across doctors.

It's an imperfect situation for sure, but I'd like to see more data.

pinnochio 11 hours ago | parent | prev | next [-]

Survivorship bias.

teitoklien 7 hours ago | parent [-]

Get some experience working with doctors a few times, and then we'll see all the bias, if one is still surviving, lol. Doctors are in one of the most corrupt professions: they focus on selling drugs they're paid a commission to promote, or they obsess over tons and tons of expensive medical tests that they themselves often know aren't needed, ordering them simply out of fear of being sued for negligence later or, again, because THEY GET A COMMISSION from the testing agencies for sending them clients.

And even with all of that info, they still come to the wrong conclusions at times. Doctors play a critically important role in our society, and during COVID they risked their lives for us more than anyone else; I do not want to insult or diminish the amount of hard work doctors do for society.

But worshipping them as holier-than-thou gods is bullshit, a conclusion almost anyone who has spent years going back and forth with various doctors will eventually reach.

Having an AI assistant doesn't hurt in terms of medical hints. We need to make Personal Responsibility popular again; in society's obsession with making everything "idiot proof" or "baby proof", we keep losing all sorts of useful and interesting solutions, because our politicians have a strong itch to regulate anything and everything they can get their hands on in order to leave a mark on society.

pinnochio 6 hours ago | parent [-]

> But worshipping them as holier than thou gods is bullshit

I'd say the same about AI.

teitoklien 5 hours ago | parent [-]

> I'd say the same about AI.

And you’d be right, so society should let people use AI while warning them about all the risks related to it, without banning it or hiding it behind 10,000 lawsuits and making it disappear by coercion.

ares623 11 hours ago | parent | prev | next [-]

How do you hold the AI accountable when it makes a mistake? Can you take away its license "individually"?

terminalshort 10 hours ago | parent | next [-]

I would care about this if doctors were held accountable for their constant mistakes, but they aren't except in extreme cases.

bfLives 8 hours ago | parent | prev | next [-]

Does it matter? I’d rather use a 90% accurate tool than an 80% accurate one that I can subject to retribution.

mensetmanusman 8 hours ago | parent | prev [-]

If it makes a mistake? You're not required to follow the AI; just use it as a tool for consideration.

ares623 8 hours ago | parent [-]

Doesn't sound very $1 trilliony

buu700 9 hours ago | parent | prev | next [-]

Aside from AI skepticism, I think a lot of it likely comes from low expectations of what the broader population would get out of it. Writing, reading comprehension, critical thinking, and LLM-fu may be skills that come naturally to many of us, but at the same time many others who "do their own research" also fall into rabbit holes and arrive at wacky conclusions like flat-Eartherism.

I don't agree with the idea that "we need rules to make people use the worse option" — contrary to prevailing political opinion, I believe people should be free to make their own mistakes — but I wouldn't necessarily rush to advocate that everyone start using current-gen AI for important research either. It's easy to imagine that an average user might lead the AI toward a preconceived false conclusion or latch onto one particular low-probability possibility presented by the AI, badger it into affirming a specific answer while grinding down its context window, and then accept that answer uncritically while unknowingly neglecting or exacerbating a serious medical or legal issue.

cpfohl 8 hours ago | parent | prev [-]

I’m saying that it is a great tool for people who can see through the idiotic nonsense these tools so often make up. A professional _has_ the context to see through it.

It should empower and enable informed decisions, not make them.

tencentshill 14 hours ago | parent | prev | next [-]

We are all obligated to hoard as many offline AI models as possible if the larger ones are legally restricted like this.

SlavikCA 9 hours ago | parent [-]

Google released the MedGemma model: "optimized for medical text and image comprehension".

I use it. Found it to be helpful.
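
For anyone who wants to try it offline, a rough sketch with Hugging Face transformers looks something like this (the model id and exact usage are from memory, so treat it as a starting point and check the model card; the model is also gated, so you have to accept Google's terms on Hugging Face first):

  # Rough sketch, not a vetted recipe: run what I believe is the text-only
  # MedGemma variant locally via the transformers chat pipeline.
  from transformers import pipeline

  pipe = pipeline(
      "text-generation",
      model="google/medgemma-27b-text-it",  # assumed id; 27B needs a big GPU
      device_map="auto",                    # let accelerate place the weights
  )

  messages = [{"role": "user",
               "content": "What are common causes of epigastric pain?"}]
  out = pipe(messages, max_new_tokens=256)
  print(out[0]["generated_text"][-1]["content"])  # last turn is the reply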

tamimio 10 hours ago | parent | prev | next [-]

That's the experience of a lot of people I know, or whose stories I've read online. But the resistance isn't about AI giving bad diagnoses; it's that they know that in 5 years doctors and lawyers will be burger flippers, and as a result people won't be motivated to go into those fields. In Canada, the process to become a doctor is extremely complicated and hard, purely to keep it a private community that only a very few can join, all to keep wages abysmally high. As a result, you end up waiting a long time for appointments, and the doctors themselves are overwhelmed too. It's a messed-up system that you'd better pray you never become a victim of.

In my opinion, AI should do both legal and medical work, with some humans kept for decision-making and the rest of the doctors becoming surgeons instead.

throwaway290 14 hours ago | parent | prev [-]

This is fresh news, right? A friend just used ChatGPT for medical advice last week (stuffed his wound with antibiotics after a motorbike crash). Are you saying you completely treated the congenital issue in this timeframe?

cj 14 hours ago | parent [-]

He’s simply saying that ChatGPT was able to point them in the right direction after 1 chat exchange, compared to doctors who couldn’t for a long time.

Edit: Not saying this is the case for the person above, but one thing that might bias these observations is ChatGPT’s memory features.

If you have a chat about the condition after it’s diagnosed, you can’t use the same ChatGPT account to test whether it could have diagnosed the same thing (since the ChatGPT account now knows the son has a specific condition).

The memory features are awesome but also suck at the same time. I feel myself getting stuck in a personalized bubble, even more so than with Google.

ninininino 14 hours ago | parent | next [-]

You can just use the wipe-memory feature. If you don't trust that, start a new account (new login creds); if you don't trust that, then get a new device, cell provider/wifi, credit card, IP, login creds, etc.

cj 12 hours ago | parent [-]

Or start a “temporary” chat.

throwaway290 9 hours ago | parent | prev [-]

> He’s simply saying that ChatGPT was able to point them in the right direction after 1 chat exchange, compared to doctors who couldn’t for a long time

He literally wrote that. I asked how he knows it's the right direction.

It must be that the treatment worked; otherwise it's more or less just a hunch.

People go "oh yep, that's definitely it" too easily. That's the problem with self-diagnosing. And you didn't even notice it happened...

Without more info, this is not evidence.