thinkingtoilet 12 hours ago:

What capabilities? The article says the study found it was entirely correct 31% of the time.
Jweb_Guru 3 hours ago:

This is what really scares me about people using AI. It will confidently hallucinate studies and quotes that have absolutely no basis in reality, and even in your own field you're not going to know whether what it's saying is real or not without following up on absolutely every assertion. But people are happy to completely buy its diagnoses of rare medical conditions based on what, exactly?
matwood an hour ago:

The study is more positive than the 31% conveys. https://www.ctvnews.ca/health/article/self-diagnosing-with-a...

> The researchers compared ChatGPT-4 with its earlier 3.5 version and found significant improvements, but not enough. In one example, the chatbot confidently diagnosed a patient’s rash as a reaction to laundry detergent. In reality, it was caused by latex gloves — a key detail missed by the AI, which had been told the patient studied mortuary science and used gloves.

> ...

> While the researchers note ChatGPT did not get any of the answers spectacularly wrong, they have some simple advice. “When you do get a response be sure to validate that response,” said Zada.

Which should be standard advice in most situations.
ipaddr 3 hours ago:

Does it say how often doctors are correct as a baseline?