atleastoptimal 3 days ago

Healthcare in the US is already in very poor shape. Thousands die waiting for care, from misdiagnosis, or from inefficiency that inflates costs, which in turn leads to denied claims because insurers won't cover them. AI is already better at diagnosis than physicians in most cases.

jakelazaroff 3 days ago | parent | next [-]

That's a pretty fantastic claim. Can you provide some links to the body of independent research that backs it up?

whodidntante 2 days ago | parent | next [-]

There is quite a lot of easy-to-find information on the web showing that the US spends twice as much per capita as our European peers and has worse outcomes, not just on average, but also when comparing similar economic demographics, including wealthy Americans. We spend $5T a year on health care, a comparative excess of over $2.5T a year.

Was just listening to this on NPR this morning:

https://www.npr.org/sections/shots-health-news/2025/07/08/nx...

The health of U.S. kids has declined significantly since 2007, a new study finds

"What we found is that from 2010 to 2023, kids in the United States were 80% more likely to die" than their peers in these nations

You also do not need the internet to understand what is going on - you just have to interact with our "health" system.

jakelazaroff 2 days ago | parent [-]

Sorry, let me clarify: the fantastic claim is "AI is already better at diagnosis than physicians in most cases."

whodidntante 2 days ago | parent [-]

Sorry, I was responding to the part that says "Healthcare in the US is already in very poor shape."

Is AI better than most physicians for diagnosis? I doubt it, and I doubt that there have been any rigorous studies yet, as the area is so new and fast-changing.

My personal experience? I am actually quite impressed, and I am an AI skeptic. I have fed in four complex scenarios that either I or someone close to me was actually going through (radiology reports, blood and other tests, lists of symptoms, etc.) and got diagnoses and treatment options that were pretty spot on.

Would I say better? In one case (this was actually for my dog), it really was better: it came up with the same diagnosis and treatment options, but was much better at laying out risks and outcome probabilities than the veterinary surgeon did, which I then verified by getting a second opinion. My hunch is that this was a matter of self-interest, not knowledge.

In two other scenarios it was spot on, and in the fourth case it was almost completely spot on, except for one aspect of a surgical procedure that has been updated fairly recently (it suggested a slightly more old-fashioned way of doing something).

So, I think there is a lot of promise, but I would never rely solely on an AI for medical opinions.

s5300 2 days ago | parent | prev [-]

[dead]

budududuroiu 3 days ago | parent | prev | next [-]

It was just yesterday we were laughing at Gemini recommending smoking during pregnancy

atleastoptimal 3 days ago | parent [-]

Google's hyper-quantized tiny AI summary model isn't representative of the abilities of the current SOTA models (Gemini 2.5 Pro, o3, Opus)

bobmcnamara 3 days ago | parent | prev | next [-]

How does AI evaluate signs today?

atleastoptimal 3 days ago | parent [-]

A process is described here: https://arxiv.org/pdf/2506.22405

>A physician or AI begins with a short case abstract and must iteratively request additional details from a gatekeeper model that reveals findings only when explicitly queried. Performance is assessed not just by diagnostic accuracy but also by the cost of physician visits and tests performed.
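To make the loop concrete, here is a minimal sketch of that sequential-diagnosis setup. Everything in it (the case data, costs, and function names) is invented for illustration; in the actual benchmark the gatekeeper is a language model revealing findings from a written case, not a lookup table.

```python
# Illustrative sketch of a gatekeeper-style diagnostic benchmark.
# Findings are revealed only when explicitly queried, and each
# query has a cost; performance = accuracy plus total spend.

CASE_FINDINGS = {
    # query -> (finding revealed, cost in dollars) -- made-up values
    "temperature": ("38.9 C", 10),
    "chest x-ray": ("right lower lobe consolidation", 120),
    "wbc count": ("14,000/uL, elevated", 25),
}

def gatekeeper(query):
    """Reveal a finding only if explicitly requested; otherwise nothing."""
    return CASE_FINDINGS.get(query, ("not available", 0))

def run_episode(queries, diagnosis, true_diagnosis):
    """Replay an agent's queries, then score accuracy and total cost."""
    revealed, total_cost = {}, 0
    for q in queries:
        finding, cost = gatekeeper(q)
        revealed[q] = finding
        total_cost += cost
    return {
        "correct": diagnosis == true_diagnosis,
        "cost": total_cost,
        "revealed": revealed,
    }

result = run_episode(
    queries=["temperature", "chest x-ray"],
    diagnosis="pneumonia",
    true_diagnosis="pneumonia",
)
print(result["correct"], result["cost"])  # True 130
```

The point of the cost term is that an agent that orders every test scores worse than one that reaches the same diagnosis with fewer, cheaper queries.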

apical_dendrite 2 days ago | parent | next [-]

I believe that dataset was built from cases selected for being unusual enough for physicians to submit to the New England Journal of Medicine. The real-world diagnostic accuracy of physicians in these cases was 100%: the hospital figured out the diagnosis and wrote it up. In the real world these cases are solved by a team of human doctors working together and consulting with different specialists. Comparing the model's results to those of a single human physician, particularly when all the irrelevant details have been stripped away and you're left with a clean case report, isn't really reflective of how medicine works in practice. They're also not the kind of situations that you as a patient are likely to experience, and your doctor probably sees them rarely if ever.

atleastoptimal 2 days ago | parent [-]

Either way, the AI model performed better than the humans on average, so it would be reasonable to infer that AI would be a net positive collaborator in a team of internists.

sorcerer-mar 3 days ago | parent | prev [-]

Okay you have a point. AI probably would do really well when short case abstracts start walking into clinics.

atleastoptimal 3 days ago | parent [-]

How else would a study scientifically determine the accuracy of an AI model in diagnosis? By testing it on real people before they know how good it is?

rafaelmn 3 days ago | parent [-]

Why not? Have the AI do it, then have a human doctor do a follow-up/review. I might not be a fan of this for urgent care, but for general visits I wouldn't mind spending a bit of extra time if it was followed by an expert exam.

sorcerer-mar 3 days ago | parent | prev [-]

I will bet $1,000 you don’t work in a clinic and you’re instead spouting press releases as fact here?

atleastoptimal 3 days ago | parent [-]

So you claim that nobody in the US has died due to waiting for care, misdiagnosis, or inefficiency leading to magnified costs which in turn leads to denied claims?

toofy 3 days ago | parent [-]

> So you claim that nobody in the US has died due to waiting for care, misdiagnosis, or inefficiency leading to magnified costs

I don't think they made those claims, at all…

atleastoptimal 3 days ago | parent [-]

The implication was that since I don't work in a clinic, I couldn't make any inferences or reference facts sourced from the news. If that's the standard for truth, then there's no point in making claims, since anyone could just say "uhhh you're not a doctor so you can't say that" ad infinitum.