burnte 3 hours ago:
I have; it's a metric I check in on every month with my providers. It's a few percent, and it's the exact reason our official policy requires all users (including providers) to check AI output for accuracy. It's heavily enforced by our CMO. We teach our people to think of it like a scribe: just as with a scribe, you need to check the output because you're legally on the hook.
kelnos 3 hours ago (parent):
Great, glad to hear that. I'm still concerned, though. I absolutely believe that's your official policy, but people get tired, people get overworked, and sometimes they'll succumb to the temptation to give the output a "quick skim" or not really review it at all. And the more we rely on these systems, the more people will be lulled into a false sense of security about their accuracy.

I'm not really sure what the solution is. Policy and process aren't always followed. Sure, tired providers can make mistakes themselves when manually taking notes and updating a chart, but I'm much more comfortable accepting a provider making an honest mistake than an AI system hallucinating something, or misinterpreting a joke as something serious.

One thing I can think of is to give patients direct access to these notes. Not just a printout, but actual access to the system that holds them, so that they can make their own notes to correct any issues, which the provider can incorporate; and if the provider doesn't incorporate them, the patient's notes remain for anyone to see in the future.

But, frankly, I think it is way too early for adoption of AI systems in this sort of critical context. These systems are just not good enough. Even if they're right 99% of the time, that's still not good enough. And they absolutely are not right 99% of the time.

(Also, just wanted to note here that you replied before I edited my comment to add a bunch of extra stuff, in case others see this and get the incorrect impression that you've ignored the rest of my comment.)