AI fares better than doctors at predicting deadly complications after surgery (hub.jhu.edu)
25 points by Improvement 2 days ago | 20 comments
estimator7292 a day ago
I actually think ML models would excel here. Humans are famously bad at estimating and weighing risks, and there's really only so much data a single human brain can store and draw conclusions from. Not to mention bias, like female patients being chronically under-diagnosed by male doctors. If you fed a mountain of surgery outcome data into an ML model, I imagine it'd be shockingly effective and (hopefully) less biased on sex and race.

It'd probably be helpful for initial diagnosis too, but I'm less confident in that. Postop risk assessment is mostly straight statistics, and statistical inference is what ML models do. Diagnosis is a bit more subjective and complex, though it's in the same general domain.

The real trick is going to be conditioning doctors not to blindly trust the risk assessment model, though I would hope it'd be accurate enough for that anyway.
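As an illustration of the "straight statistics" point, here is a minimal sketch of a postop risk model: logistic regression fitted to synthetic outcome data. The feature names, coefficients, and data are all made up for the example and are not taken from the study.

    # Minimal sketch of a postoperative-complication risk model.
    # Everything here is synthetic; a real model would be trained on a
    # large, curated registry of surgical outcomes.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n = 5000

    # Hypothetical preoperative features.
    X = np.column_stack([
        rng.normal(60, 15, n),    # age (years)
        rng.normal(1.0, 0.4, n),  # serum creatinine (mg/dL)
        rng.integers(0, 2, n),    # emergency case (0/1)
        rng.integers(0, 2, n),    # insulin-treated diabetes (0/1)
    ])

    # Synthetic outcome: complication odds rise with age, creatinine,
    # emergency status, and diabetes.
    logit = -6 + 0.05 * X[:, 0] + 1.2 * X[:, 1] + 1.0 * X[:, 2] + 0.7 * X[:, 3]
    y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # Discrimination is usually reported as AUROC, the same metric used
    # to evaluate conventional risk scores.
    print("AUROC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))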
catigula 2 days ago
AI seems to explain this better than the article's framing does:

> ...the body of the article doesn’t describe a panel of physicians making predictions at all. The headline says “AI fares better than doctors,” but the text says the model outperformed “risk scores currently relied upon by doctors,” i.e., standard scoring tools clinicians use—not the judgments of the surgeons on the case or an outside panel.
bitwize 2 days ago
I get the feeling that this is one of those things where you s/AI/statistics/g. Doctors using a predictive statistical model trained on thousands of patients' worth of data faring better than doctors using the seat of their pants makes total sense.
BrokenCogs 2 days ago
Human doctors have a tendency to underestimate their own complication rate, often because they're too delusional about their own capabilities. I've heard the same doctor say "this has never happened to me in my 20 years of doing surgery" twice, each time after a complication occurred during a procedure.
whyandgrowth 2 days ago
The strange thing is that articles like this always give the impression that AI is replacing humans even in serious work, which is frightening.
dogmatism a day ago
1) No, machine learning performs better than typical "risk scores" such as the RCRI (it was not tested against doctors' clinical judgement).

2) Even so... so what? What we don't have is any reliable way to reduce surgical complications in cases where the risk is elevated but the benefit still outweighs it.
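For readers unfamiliar with it, the RCRI mentioned in point 1 is a simple six-factor point count. A minimal sketch follows; the factor names are paraphrased and the event rates are the approximate figures commonly cited for the original validation cohort. This is an illustration, not a clinical tool.

    # Sketch of the RCRI (Revised Cardiac Risk Index) point count.
    # One point per risk factor; higher totals map to higher risk classes.

    RCRI_FACTORS = [
        "high_risk_surgery",        # intraperitoneal, intrathoracic, or suprainguinal vascular
        "ischemic_heart_disease",
        "congestive_heart_failure",
        "cerebrovascular_disease",
        "insulin_treated_diabetes",
        "creatinine_over_2_mg_dl",
    ]

    # Approximate major cardiac event rates by class (I-IV), as often cited
    # for the original validation cohort.
    EVENT_RATE = {1: "~0.4%", 2: "~0.9%", 3: "~6.6%", 4: "~11%"}

    def rcri(patient: dict) -> tuple[int, int]:
        """Return (points, risk class) for a dict of boolean risk factors."""
        points = sum(bool(patient.get(f, False)) for f in RCRI_FACTORS)
        risk_class = min(points, 3) + 1  # 0 -> I, 1 -> II, 2 -> III, >=3 -> IV
        return points, risk_class

    # Hypothetical patient with two risk factors.
    points, cls = rcri({"ischemic_heart_disease": True, "insulin_treated_diabetes": True})
    print(f"RCRI points: {points}, class {cls}, estimated event rate {EVENT_RATE[cls]}")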
more_corn a day ago
We need better words. This isn't a chatbot. Most people think ChatGPT == AI, whereas this is a specially trained model tuned to this exact use case.
reify 2 days ago
"Fares Better" sounds unscientific and very much like click bait In cases where the numbers suggest that the average treated person "Fares better" than barely over 50% of the control group, or when effects are inconsistent, readers may not interpret the effects as profound. Providing real numbers that are easily understandable, rather than evocative descriptions, allows readers to form their own conclusions about the results. | ||||||||||||||||||||||||||||||||||||||
datavirtue 2 days ago
Until we build in the same financial bias... |