PeterStuer (4 hours ago):
The absolute worst I have encountered by far is people using ChatGPT to self-diagnose their presumed psychological conditions. Of course ChatGPT goes in hard to sycophantically confirm all 'suggestive' leads with zero pushback.
okdood64 (3 hours ago, in reply):
> sycophantically confirm all 'suggestive' leads with zero pushback

This is true. However: as someone who has done multiple assessments in a clinical setting for anxiety and depression, there is no special magic that requires a human to do it, and many providers are happy to confirm a diagnosis pretty quickly without digging in further. There's the GAD-7 and the PHQ-9, respectively. While the interview is semi-structured and the interviewer has some discretion (how the patient presents in terms of affect, mood, etc.), they mostly go off the quiz.

The trouble you can run into is if there's another condition or a differential diagnosis that could be missed (by an LLM and a human interviewer alike).
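
To make concrete how mechanical "going off the quiz" is: both screeners are just a sum of item responses (each answered 0-3, from "not at all" to "nearly every day") compared against published severity cutoffs. A minimal sketch in Python, with illustrative names that aren't from any clinical software package:

    # PHQ-9: 9 items, total 0-27; GAD-7: 7 items, total 0-21.
    # Cutoffs below are the standard published severity bands.
    PHQ9_CUTOFFS = [(20, "severe"), (15, "moderately severe"),
                    (10, "moderate"), (5, "mild"), (0, "minimal")]
    GAD7_CUTOFFS = [(15, "severe"), (10, "moderate"),
                    (5, "mild"), (0, "minimal")]

    def score(responses, cutoffs):
        total = sum(responses)  # each response is an int in 0..3
        for threshold, label in cutoffs:
            if total >= threshold:
                return total, label

    # Example PHQ-9 with mostly "several days"/"more than half the days" answers
    print(score([2, 2, 1, 2, 1, 2, 1, 0, 0], PHQ9_CUTOFFS))  # -> (11, 'moderate')

That is essentially the whole instrument; the interviewer's discretion sits on top of that sum, which is the commenter's point about how little "special magic" is involved.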