spwa4 7 hours ago

If you read the study, the whole conclusion is much less spectacular than the article. What the article implies happened:

patients -> AI -> diagnosis (you know, with a camera, or perhaps a telephone I guess)

What REALLY happened

patients -> nurse/MD -> text description of symptoms -> MD -> question (as in MD asked a relevant diagnostic question, such as "is this the result of a lung infection?", or "what lab test should I do to check if this is a heart condition or an infection?") -> AI -> answer -> 2 MDs (to verify/score)

vs

patients -> nurse/MD -> text description of symptoms -> MD -> question -> (same or other) MD -> answer -> 2 MDs verify/score the answer

Even with that enormous caveat, there are major issues:

1) The AI was NOT attempting to "diagnose" in the Dr. House sense. The AI was attempting to follow published diagnostic guidelines as perfectly as possible. A right answer by the AI meant the AI following MDs' advice, a published process, NOT the AI reasoning its way to what was wrong with the patient.

2) The MD with AI support was NOT more accurate than the MD alone (a better score, but NOT statistically significant, hence not more accurate). And it was very much a nurse or MD taking the symptoms and an MD pre-digesting the data for the AI.

3) Diagnoses were "correct" in the sense that they followed diagnostic standards, as judged afterwards by other MDs, NOT in the sense that they were tested on a patient and actually helped a live patient (in fact, no patients were directly involved in the study at all).
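On point 2: a higher raw score can still be statistically indistinguishable from no improvement. A minimal sketch of the standard two-proportion z-test, with made-up numbers that are NOT from the study, just to show how "better score but not significant" can happen:

```python
from math import sqrt

def two_proportion_z(correct_a, n_a, correct_b, n_b):
    """z-statistic for the difference between two accuracy proportions."""
    p_a, p_b = correct_a / n_a, correct_b / n_b
    p_pool = (correct_a + correct_b) / (n_a + n_b)  # pooled proportion under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical numbers: MD+AI gets 80/100 right, MD alone gets 74/100.
z = two_proportion_z(80, 100, 74, 100)
print(round(z, 2))  # 1.01 -> below 1.96, so not significant at the 5% level
```

A 6-point gap on 100 cases simply isn't enough data to rule out chance; with these numbers you'd need a z above roughly 1.96 to claim significance.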

If you think about it, for most patients even the treating MDs don't know the correct conclusion. They saw the patient come in, they took a course of action (and probably wrote at best half of it down), and the situation of the patient changed. And we repeat this cycle until the patient goes back out, either vertically or horizontally. Hopefully vertically.

And before you say "let's solve that", keep in mind that a healthy human is only healthy in the sense that their body has the situation under control. Your immune system is fighting 1000 kinds of bacteria and 10 or so viruses right now, even when you're very healthy. There are also problems that developed during your life: scars, ripped and imperfectly repaired blood vessels, muscle damage, bone cracks, parts of your circulatory system under far too much pressure, wounds, things you managed to push through your skin that now leak stuff into your body (splinters, insects, parasites, ...), 20 cancers attempting to spread (depending on age, though even a 5-year-old will have some of that), food you really shouldn't have eaten, etc, etc, etc. If you go to the emergency room, the point is not to fix all problems. The point is to get your body out of the worsening cycle.

This immediately raises the concern that everything here comes from doctors' reports. In practice, maybe the AI only performs "better" because a real doctor walked up to the patient, checked something for himself, and then didn't write it down.

What you can perhaps claim this study shows is that, in the right circumstances, an AI can follow an MD's instructions under time and other pressure better than an actual MD can.

foobiekr 7 hours ago

This. The fact that the AI projects have to spin so hard should be tipping people off. But for some reason it doesn’t.

2ndorderthought 5 hours ago

People only read headlines and offload their critical thinking to the very companies that are selling to them with their next publication. It's sad.