miki123211 5 days ago
> If you wouldn't trust its medical advice without review from an actual doctor, why would you trust its advice on anything else?

When an LLM gives you medical advice, it's right x% of the time. When a doctor gives you medical advice, it's right y% of the time. Over the last few years, x has gone from 0 to wherever it is now, while y has mostly stayed constant. It is not unimaginable to me that x might (and notice I said might, not will) cross y at some point in the future.

The real problem with LLM advice is that it's harder to find a "scapegoat" (particularly for legal purposes) when something goes wrong.
mrtranscendence 4 days ago
Microsoft claims to have an AI setup that outperforms human doctors on diagnosis tasks: https://microsoft.ai/new/the-path-to-medical-superintelligen...

> MAI-DxO boosted the diagnostic performance of every model we tested. The best performing setup was MAI-DxO paired with OpenAI's o3, which correctly solved 85.5% of the NEJM benchmark cases. For comparison, we also evaluated 21 practicing physicians from the US and UK, each with 5-20 years of clinical experience. On the same tasks, these experts achieved a mean accuracy of 20% across completed cases.

Of course, AI "doctors" can't do physical examinations, and the best-performing models cost thousands of dollars to run per case. This is also a test of diagnosis, not of treatment.
randomNumber7 5 days ago
If you consider how little time doctors have to look at you (at least in Germany's half-broken public health sector) and how little they actually care... I think x is already higher than y for me.