Huxley1 6 hours ago

I've used ChatGPT to help understand medical records too. It's definitely faster than searching everything on my own, but whether the information is reliable still depends on personal judgment or asking a real doctor. More people are treating it like a doctor or lawyer now, and the more it's used that way, the higher the chance something goes wrong. OpenAI is clearly drawing a line here. You're free to ask questions, but it shouldn't be treated as professional advice, especially when making decisions for others.

StarterPro 3 hours ago | parent | next [-]

You should not be feeding your medical records into ChatGPT.

matwood an hour ago | parent | next [-]

And you shouldn't be using Gmail or Google Search. At some point the benefits outweigh the costs.

eru 35 minutes ago | parent | prev [-]

Why not?

bushbaba 5 hours ago | parent | prev [-]

I’ve found it just as accurate as, and a better experience than, telehealth or generalist doctors.

sarchertech 4 hours ago | parent | next [-]

You can’t possibly have enough data to support that statement.

jjtheblunt 3 hours ago | parent [-]

There was no mention of sample size, though, so the statement might be true for the commenter without being widely applicable, which is your point.

ryandrake 4 hours ago | parent | prev | next [-]

If you're not a doctor, how do you know it's accurate?

This is the huge problem with using LLMs for this kind of thing. How do you verify that it is better? What is the ground truth you are testing it against?

If you wanted to verify that ChatGPT could do math, you'd ask it 100 math problems and then (importantly) verify its answers with a calculator. How do you verify that ChatGPT can interpret medical information without ground truth to compare it to?
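To make that concrete, here's a minimal sketch of such an eval in Python (ask_model is a placeholder for whatever LLM call you'd actually make; it's an assumption, not a real API):

    import random

    def ask_model(question: str) -> str:
        # Placeholder for a real LLM call: an assumption, not a real API.
        raise NotImplementedError

    def run_math_eval(n: int = 100) -> float:
        correct = 0
        for _ in range(n):
            a, b = random.randint(100, 999), random.randint(100, 999)
            truth = a * b  # ground truth from the "calculator"
            reply = ask_model(f"What is {a} * {b}? Reply with only the number.")
            try:
                correct += int(reply.strip().replace(",", "")) == truth
            except ValueError:
                pass  # an unparseable reply counts as wrong
        return correct / n  # fraction of the n problems answered correctly

The point is that the scoring step needs an oracle. For arithmetic the oracle is trivial; for medical interpretation there isn't one sitting on your desk.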

People are just saying, "oh it works" based on gut vibes and not based on actually testing the results.

matwood an hour ago | parent | next [-]

How does anyone know if what the doctor says is accurate? Obviously people should put the most weight on their doctor's opinion, but there's a reason people always say to get a second opinion.

Unfortunately, because of how the US healthcare system works today, people have to become their own doctors and advocates. LLMs are great at surfacing the unknown unknowns, and I think they can help people better prepare for the rare 5 minutes they get to speak to an actual doctor.

xp84 3 hours ago | parent | prev | next [-]

I know it’s hard to accept, but it’s got to be weighed against the real-world alternative:

You get about 5-8 minutes of face time with an actual doctor, but may have to wait weeks just to get in to see one, except maybe at an urgent care or ER.

Or people used to just play around on WebMD, which was even worse since it wasn't in any way tailored to the patient's stated situation.

There’s the rest of the Internet too. You can also blame AI for this part, but today the Internet in general is even more awash in slop: static, AI-generated BS. Like it or not, the garbage is there, and it will be most of what people find on Google if they can’t use ChatGPT or a similar flagship model this way.

Against this backdrop, I’d rather people ask the flagship models specific questions and get specific answers that are halfway decent.

Obviously the stuff you glean from the AI sessions needs to be taken to a doctor for validation and treatment, but I think coming into your 5-minute appointment having already had all your dumbest and least-informed ideas and theories shot down by ChatGPT is a big improvement and helps you maximize your time. It’s true that people shouldn’t recklessly attempt to self-treat based on GPT, but the unwise people doing that were just self-treating based on WebMD hunches before.

eru 34 minutes ago | parent | next [-]

> You get about 5-8 minutes of face time with an actual doctor, but may have to wait weeks just to get in to see one, except maybe at an urgent care or ER.

This depends heavily on where you are, and on how much money you want to throw at the problem.

ryandrake 2 hours ago | parent | prev [-]

I get what you're saying, and I agree it might be fun to play around with ChatGPT and Wikipedia and YouTube and WebMD to try to guess what that green bump on your arm is, but it's not research--it needs to be treated as entertainment.

When it comes to taking actual real-world action, I would take 5-8 minutes with a real doctor over 5-8 months of browsing the Internet. The doctor has gone to med school, passed the boards, done his residency, and you at least have that as evidence that he might know what he is doing. The Internet offers no such evidence.

I fear that our society in general is quickly entering a very dangerous territory where there's no such thing as expertise, and unaccountable, probabilistic tools and web resources of unknown provenance are seen as just as good as an expert in his field.

gaudystead 2 hours ago | parent [-]

I don't disagree with you, but if I prompted an LLM to ask me questions like a doctor would for a non-invasive assessment, would it ask me better or worse questions than an actual doctor?

I ask (somewhat rhetorically) to get the mind thinking, but I'm legitimately curious whether, just from a verbal survey, the AI doctor would ask me about things more directly related to any illness it might suspect, versus a human who might narrow factors down like a 90s TV "ghost speaker" type of person: someone fishing for matches amongst a fairly large dataset.
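For what it's worth, the experiment is easy to run. A sketch using the official openai Python SDK (the model name and prompt wording are arbitrary choices on my part, and it assumes an API key in your environment):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o",  # arbitrary choice of model
        messages=[
            {"role": "system", "content": (
                "Act as a clinician doing a non-invasive intake interview. "
                "Ask one question at a time, narrowing toward a differential "
                "diagnosis. Do not give advice or a diagnosis; only ask."
            )},
            {"role": "user", "content": "I'm ready."},
        ],
    )
    print(resp.choices[0].message.content)  # the model's first intake question

Then compare its line of questioning to what your doctor asked at your last visit.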

lp0_on_fire 3 hours ago | parent | prev [-]

Nobody would use these services for anything important if they _actually understood_ that these glorified Markov chains are just as likely to confidently assert something false, and lie about it when pressed, as they are to produce accurate information.

These AI companies have sold a bill of goods, but the right people are making money off it, so they’ll never be held responsible in a scenario like the one you described.

exitb 2 hours ago | parent | next [-]

Hasn't statistical analysis been a legitimate tool for aiding diagnosis forever? It's not exactly surprising that a pattern matcher does reasonably well at matching symptoms to diseases.

matwood an hour ago | parent | prev [-]

"What is the most likely cause of this set of facts?" is how diagnostics works. LLMs are tailor-made for this type of use case.
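A crude version of that logic, naive-Bayes-style "most likely cause given these facts," fits in a few lines. All the numbers below are invented for illustration, not real clinical data:

    from math import log

    PRIORS = {"flu": 0.05, "covid": 0.03, "cold": 0.20}  # P(disease), made up
    LIKELIHOODS = {  # P(symptom | disease), also made up
        "flu":   {"fever": 0.9, "cough": 0.8, "anosmia": 0.10},
        "covid": {"fever": 0.8, "cough": 0.7, "anosmia": 0.60},
        "cold":  {"fever": 0.2, "cough": 0.6, "anosmia": 0.05},
    }

    def map_diagnosis(symptoms: list[str]) -> str:
        # Maximum a posteriori: argmax over log P(d) + sum of log P(s | d).
        def score(d: str) -> float:
            return log(PRIORS[d]) + sum(log(LIKELIHOODS[d][s]) for s in symptoms)
        return max(PRIORS, key=score)

    print(map_diagnosis(["fever", "anosmia"]))  # -> "covid" with these toy numbers

An LLM isn't literally doing this, of course, but the shape of the task, matching a set of observations to the most probable explanation, is the same.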

gxs 5 hours ago | parent | prev [-]

Ditto for lawyers

The thing that gets me about AI is that people act like most doctors or most lawyers are not … shitty, when your odds of running into a below-average one are almost 50/50.

Doctors these days are more like physicists, when most of the time you need a mechanic or an engineer. I’ve had plenty of encounters where I had to insist on an MRI or on specific bloodwork to home in on the root cause of an ailment the doctor had just chalked up to diet and exercise.

Anything can be misused, including Google, but the answer isn’t to take it away from people.

Legal/financial advice is so out of reach for most people that the harsh truth is ChatGPT is better than nothing, and anyone who would blindly follow what it says is bound to fuck those decisions up in some way anyway.

On the other hand, if you can leverage it the same as any other tool, it’s a legitimate force multiplier.

The cynic in me thinks this is just being done in the interest of those professions, but that starts to feel a bit tinfoil-hat-y.