| ▲ | butlike 3 days ago |
| How can you double-check the work? Also, what happens when the AI transcription is wrong in a way that gets an employee fired? You can't fire a model. Finally, who cares about millions saved (given the risk introduced above) when trillions are on the line? |
|
| ▲ | PaulRobinson 3 days ago | parent | next [-] |
| Having a human read a summary is way faster than having them write it. If they want to edit it, they can. AI today is terrible at replacing humans, but OK at enhancing them. Everyone who gets that is going to find gains - real gains, and fast - and everyone who doesn't is going to spend a lot of money on an almost irreversible mistake. |
| |
| ▲ | butlike 3 days ago | parent [-] |
| "Reading a summary is faster, so enhancing humans with AI will bring boons or busts to the implementer." Now, which would you rather have: the summary, or the original? (The summary above is intentionally vague to a fault, for argument's sake on my end.) |
|
|
| ▲ | throitallaway 3 days ago | parent | prev [-] |
| I presume they're not using these notes for anything mission or life critical, so anything less than 100% accuracy is OK. |
| |
| ▲ | butlike 3 days ago | parent [-] |
| I disagree with the idea of inconsequential notes. All notes are intrinsically actionable; that's why they're a note in the first place. Any note has unbounded consequences depending on the action taken from it. |
| ▲ | wredcoll 3 days ago | parent [-] |
| You're being downvoted, I suspect for being a tad hyperbolic, but I think you're raising a really important point: the ever more gradual removal of a human's ability to disobey the computer system running everything, and the lack of responsibility for following computer instructions. It's a tad far-fetched in this specific scenario, but suppose an AI summary says something like "cancel the subscription for user xyz", someone else takes action on it, and xyz is the wrong ID - what happens? |
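One common mitigation for the wrong-ID scenario above is to treat an AI-extracted action as a proposal, not a command: reject IDs the system doesn't recognize, and hold destructive verbs until a named human confirms them. This is a minimal sketch of that idea, not anything from the thread; the names (`Action`, `execute`, `DESTRUCTIVE`) are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Action:
    verb: str       # e.g. "cancel_subscription"
    target_id: str  # ID extracted from the AI summary

# Verbs that must never run without explicit human sign-off (assumed list).
DESTRUCTIVE = {"cancel_subscription", "delete_account"}

def execute(action: Action, known_ids: set, confirmed_by=None):
    """Reject unknown IDs outright, and hold destructive verbs
    until a named human has confirmed them."""
    if action.target_id not in known_ids:
        return "rejected: unknown id"   # a wrong ID never reaches execution
    if action.verb in DESTRUCTIVE and confirmed_by is None:
        return "held: needs human confirmation"
    return f"ok: {action.verb}({action.target_id})"
```

With this gate, the thread's scenario ("cancel the subscription for user xyz" where xyz is wrong) fails closed instead of silently cancelling the wrong account, and responsibility stays with the confirming human rather than the summary.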
|
|