kelnos 3 hours ago
How have you evaluated the error rate? It's unreasonable to expect that these systems will not commit any errors at all. Have any errors resulted in adverse patient outcomes?

Also consider that these aren't usually just transcription services. They also interpret what the doctor and patient are saying. Presumably they offer summaries as well. Unless the doctor immediately reviews the transcript, interpretation, and summary after each visit, and manually corrects any inconsistencies, these sorts of errors will just go unnoticed, with incorrect information becoming part of a person's permanent medical history.

See a comment below[0] where a joke made by the patient about "doing coke" (as in Coca-Cola) was interpreted by the AI as "the patient used cocaine recently". That sort of error has horrifying implications. If the doctor didn't catch it, I imagine that note could have all sorts of negative consequences for the patient, including insurance rejections and possible legal action if any of this data leaks.

And it's funny that you say patients feel more comfortable and feel like the doctor connects with them more: once people (both patients and doctors) figure out this weakness of these systems, they will have to start self-censoring and speaking in an impersonal, neutral way to avoid mistakes like the above.
burnte 3 hours ago | parent
I have; it's a metric I check in on every month with my providers. It's a few percent, which is exactly why our official policy requires all users (including providers) to check AI output for accuracy. It's heavily enforced by our CMO. We teach our people to think of it like a scribe: just as with a scribe, you need to check the output, because you're legally on the hook.