unstyledcontent 7 hours ago

I have had some incredible medical advice from ChatGPT. It has saved me from small mystery issues, like a rash on my face, small enough that I probably wouldn't have bothered to see a doctor. BUT it also failed to diagnose a medical issue that ended up with a trip to the ER and emergency surgery.

A few weeks before the ER, I was having stomach pain. I went to the doctor with theories from ChatGPT in hand; they checked me for those things and then didn't check me for what ended up being a pretty obvious issue. What's interesting is that I mentioned to the doctor that I used ChatGPT, and the doctor even seemed to value that opinion and did not consider other options (what it ultimately ended up being was rare but really obvious in retrospect; I think most doctors would have checked for it). I do feel I actually biased the first doctor's opinion with my "research."

hwillis 7 hours ago | parent | next [-]

> I do feel I actually biased the first doctor's opinion with my "research."

It may feel easy to say doctors should just consider all the options. But telling them an option does worse than just biasing their thinking: they are going to interpret it as information about your symptoms.

If you feel pain in your abdomen but only talk about your appendix, they are rightfully going to think the pain is in the region of your appendix. They are not going to treat you like you have kidney pain. How could they? If they had to treat your descriptions as covering every condition you might be relating them to, that information would be practically useless.

ljm 3 hours ago | parent | next [-]

It sounds strange to me that you would use GPT to start consulting for your doc, as if you suddenly knew better than them. You don't want to be doing their job for them.

If I used GPT for my medical issue last year and everybody took my word for it, I would be dead.

QuantumGood 2 hours ago | parent | next [-]

Neither "the worst case would be" nor "everything is a sliding scale" are good single hueristics. There are rarely There are rarely good single hueristics, but implying them tends to color discussions strongly.

kevin_thibedeau 3 hours ago | parent | prev [-]

I've related self-diagnoses of minor issues to a doctor, immediately followed by a proviso that I don't put a lot of credence in non-professional opinions. The doctor was supportive that patient-directed investigations had value. There is a threshold where an informed patient can be useful for treatment.

Groxx an hour ago | parent [-]

Yeah, I personally know a couple of people for whom self-research found the correct diagnosis, and I am one of them. We had a fantastic primary, who worked with us quite closely and did a lot of research after we brought him some new information.

Doctors don't know everything and don't have access to everything; they are just quite a lot better than the alternatives in the vast majority of cases, so your default odds are much better following their recommendations than anything else. Training is worth a lot, everyone knows it's not perfect, and that's entirely fine.

thfuran 3 hours ago | parent | prev [-]

Any competent doctor is aware that patients are likely to misdescribe things. If you walk in and say your appendix hurts, they absolutely should try to clarify that rather than just assuming you have appendicitis.

Aurornis 6 hours ago | parent | prev | next [-]

> I do feel I actually biased the first doctor's opinion with my "research."

This has been a big problem in medicine since the early days of WebMD: Each appointment has a limited time due to the limited supply of doctors and high demand for appointments.

When someone arrives with their own research, the doctor has to make a choice: Do they work with what the patient brought and try to confirm or rule it out, or do they try to walk back their research and start from the beginning?

When doctors appear to disregard the research patients arrive with, many patients get very angry. It leads to negative reviews or even formal complaints being filed (often with encouragement from some Facebook group or TikTok community they were in). There might even be bigger problems if the patient turns out to be correct and the doctor did not embrace the research, which can prompt lawsuits.

So many doctors will err on the side of addressing patient-provided theories first. Given the finite time available to see each patient (with waiting lists already extending months out in some places), this can crowd out time for a big-picture discussion through the doctor's own diagnostic process.

When I visit a doctor I try to ground myself to starting with symptoms first and try to avoid biasing toward my thoughts about what it might be. Only if the conversation is going nowhere do I bring out my research, and then only as questions rather than suggestions. This seems to be more helpful than what I did when I was younger, which is research everything for hours and then show up with an idea that I wanted them to confirm or disprove.

bryanlarsen 6 hours ago | parent | next [-]

> Each appointment has a limited time

A doctor is typically scheduled at 6 patients/hour. In that time they also have to chart, walk between rooms, make up time for the other patients that inevitably went over time, et cetera. The doctor you're seeing probably has a goal of only talking to you for 3 minutes.

Aurornis 4 hours ago | parent [-]

> A doctor is typically scheduled at 6 patients/hour.

This is untrue. General practice physicians are usually at 3 patients per hour. Some specialists can get into the range of 5 or more per hour if assistants handle most of the prep work.

The average across all specialties is around 3, though.

> In that time they also have to chart, walk between rooms, make up time for the other patients that inevitably went over time, et cetera. The doctor you're seeing probably has a goal of only talking to you for 3 minutes.

I've been through two different medical systems due to job changes/moving. Both of them gave me the option of a 20 minute or 40 minute appointment slot, with the latter requiring some pre-screening to be approved by the staff. I got the time every time I went.

If your doctor is only giving you 3 minutes you need to find a new one.

Calavar 3 hours ago | parent [-]

I know you qualified your assertion of three patients an hour with general practice, but there are plenty of specialty practices where six patients an hour is common. Dermatology and ophthalmology clinics often run at that pace (at least in the US). Some surgical clinics can run at that pace for follow up visits (not for initial visits)

Aurornis 3 hours ago | parent [-]

That's exactly what I said in my 3rd sentence.

bandrami 5 hours ago | parent | prev | next [-]

I'm annoyed enough by coworkers asking "is the server down?" that I try not to do the equivalent to other people at their jobs, particularly doctors.

tokai 6 hours ago | parent | prev [-]

My aunt died from this (in my opinion). She spent two years confusing her diagnosis and treatment, and borderline harassing her doctors, by believing her own research was on point and interpreting all her symptoms through that lens. In the end it wasn't borrelia, parasites, 5G, or any of the other fancies, but just lung cancer, which was only diagnosed when it was very well developed.

walletdrainer 4 hours ago | parent [-]

There’s a difference between mental illness and active participation.

People not suffering from mental illness will typically not blame 5G for their health concerns.

ifyoubuildit 4 hours ago | parent [-]

You're a lay person. You know there is a thing out there called 'foo'.

You've read things that compellingly claim that foo causes xyz symptoms. You also know that some people that have obviously palpable disdain for you claim that foo could never cause these symptoms.

You have xyz symptoms. Are you mentally ill if you think that foo could be the cause?

thfuran 3 hours ago | parent [-]

Are the compelling claims from experts in foo or xyz? Is the disdain?

ifyoubuildit 3 hours ago | parent [-]

Both present themselves to you as experts.

SoftTalker 7 hours ago | parent | prev | next [-]

> what it ultimately ended up being was rare but really obvious in retrospect, I think most doctors would have checked for it

I'm not so sure. Doctors are trained to check for the most common things that explain the symptoms. "When you hear hoofbeats, think horses not zebras" is a saying that is often heard in medicine.

ChatGPT was trained on the same medical textbooks and research papers that doctors are.

giraffe_lady 7 hours ago | parent [-]

> ChatGPT was trained on the same medical textbooks and research papers that doctors are.

Yeah hm I wonder what the difference could possibly be.

boondongle 6 hours ago | parent | prev | next [-]

This is ultimately the same difference between a search engine and a professional. 10 years before this, Googling the symptoms was a thing.

I have a family member who had a "rare but obvious" one, but it took 5 doctors to reach the diagnosis. What we really need to see are attempts at blinded studies and real statistical rigor. It's funny to paint a tunnel on a canvas and get a Tesla to drive into it, but there's a reason studies (and the more blinded the better) are the standard.

BloondAndDoom 6 hours ago | parent | prev | next [-]

The real story here is that your doctor actually listened to you. I appreciate what a lot of doctors do, but the majority of them are fucking irritating and don't even listen to your issues. I'm glad we have AI and are less reliant on them.

PearlRiver 5 hours ago | parent [-]

It is not a doctor's job to listen, smile, or be nice. Their job is to fix you.

boondongle 5 hours ago | parent [-]

I mean - obviously if they're not listening their chance of the latter is pretty low.

Doctors hate to hear this, but if your communication and social skills are so poor that the patient can't or won't follow the care you've given, your value is lost.

bluSCALE4 5 hours ago | parent | prev | next [-]

Personally, I think the value in ChatGPT in health is not that it's right or wrong but that it encourages you to take an active role in your health and more importantly to try things. I've gone through similar issues with ChatGPT where it's convinced me that if A is true, therefore so must B though that may not be the case.

In the future, I think I'll likely review things with ChatGPT and have an opinion and treat the doctor like a ChatGPT session as well--this is opposed to leading the doctor to what I believe I should be doing. I was dismissive about the doctor's advice because it seemed so obvious but more and more, I feel that most of our issues are caused by habitual, daily mistakes--little things that take hold seasonally or over periods of stress that appear like chronic health issues. At least for me.

luke5441 5 hours ago | parent | prev | next [-]

We have the same kind of issue as software engineers. Users come to us with solutions to their problems and want us to implement them. At that point the lazy path would be to just do that. If you have bad management, software engineers might even be punished for questioning the customers.

What you want instead is for users to just describe their problem, as unbiased as possible and with enough detail, and then let the expert come up with an appropriate solution that solves it.

I try to do that as well when going to the doctor.

cmsp12 5 hours ago | parent | prev | next [-]

You should've let the doctor do their job. If they reach a different conclusion, then you can tell them what you researched, and they will make a decision having already done their own research, without you biasing them.

soco 7 hours ago | parent | prev [-]

Which is exactly why the AI, at least the ones of today, should never be used beyond the level of (trusted or not) advisor. Yet not only many CxOs and boards, but even certain governments which shall not be named, are stubbornly trying, for cost or whatever other reasons, to throw entire populations (employees or nations) under the AI bus. And I sincerely don't believe anything short of an uprising will be able to stop them. Change my mind.

qalmakka 7 hours ago | parent | next [-]

I agree. AI right now is at the level of a "knowledgeable friend", not a "professional with years of real-world experience". You'd listen to what your friend has to say, but taking pills based on one of their suggestions? Dumb idea. It's great for brainstorming, but just like your knowledgeable friend who likes reading Wikipedia pages a bit too much, you need to check it isn't jumping to conclusions too quickly.

asdff 4 hours ago | parent | prev | next [-]

The sad truth is that it's because, while we all appreciate hard work and a good job, that isn't what's needed to move forward in the world of business. Creaky, leaky products held together under the hood by scotch tape and string are fine. You don't make more money having a better product, a more performant tool, better benchmarks. End users (aside from tools written for other engineers) don't care. They really don't. Word 95 probably opens faster than Word today.

Management has realized this. Hey I can outsource to bangalore/hyderabad/east europe/ai, get something that barely works, and just market the crap out of it. Look at the sort of companies, products, and services that dominate markets today. These aren't leaders in quality or engineering. They are leaders in marketing. Marketing is what sells. Marketing can sell billions of steaming turds. Nike shoes are pieces of shit but it's marketing that makes the brand and provides all value in the stock. The world doesn't value quality. It values noise and pretty feathers.

simonebrunozzi 7 hours ago | parent | prev [-]

> but even certain governments which shall not be named

Why can't you name them, and give us some context? Is this based on public info, or not?

_dwt 6 hours ago | parent [-]

Not the original commenter, but you may have noticed a wee kerfuffle between a large nation-state's "Secretary of War" and a frontier model provider over whether the model's licensing would permit autonomous lethal weapon systems operated by said (and I cannot emphasize the middle word enough) large _language_ model.