teleforce | 3 days ago
> It would be similar to if I claimed that an LLM is an expert doctor, but in my data I've filtered out all of the times it gave incorrect medical advice

Sharp eyes. Similarly, Andrew Ng and his Stanford team pulled the same trick with a heavily skewed training-to-testing ratio in his famous cardiologist-level paper published in Nature Medicine [1]. More than 99% of the data went to training and less than 1% to testing, which fails AI validation 101. The paper would not stand in most AI conferences, yet it was published in Nature Medicine, one of the highest-impact-factor journals there is, and it has many citations for AI in healthcare and medicine.

[1] Cardiologist-level arrhythmia detection and classification in ambulatory electrocardiograms using a deep neural network:
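For context, a conventional held-out evaluation reserves a substantial fraction of the data for testing, so the test metrics have enough samples to be statistically meaningful. A minimal sketch of the difference, using synthetic data and hypothetical sizes (not the paper's actual pipeline):

```python
import random

random.seed(0)

# Synthetic dataset of 10,000 labeled examples (hypothetical stand-in).
data = list(range(10_000))
random.shuffle(data)

def split(data, test_fraction):
    """Hold out `test_fraction` of the shuffled data for evaluation."""
    n_test = int(len(data) * test_fraction)
    return data[n_test:], data[:n_test]  # (train, test)

# Conventional split: e.g. 20% held out for testing.
train, test = split(data, test_fraction=0.20)
print(len(train), len(test))  # 8000 train / 2000 test

# The split criticized above: less than 1% held out,
# leaving a tiny test set on which results are fragile.
train_skewed, test_skewed = split(data, test_fraction=0.01)
print(len(train_skewed), len(test_skewed))  # 9900 train / 100 test
```

With only ~1% of the data held out, a handful of lucky or unlucky test cases can swing the reported accuracy by whole percentage points, which is the core of the criticism.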