visarga | 2 days ago
An ATM is a reliable machine with bounded risk (the money inside), while an AI agent could steer your company into bankruptcy and bear no liability for it. AI has no skin and, depending on the application, a much higher upper bound for damage. A digit read wrong in a medical transcript, and a patient dies.

> There are a ton of current careers that are just email + meetings + powerpoint + spreadsheet that can go the same way.

Managing risk can't be automated. Every project and task needs a responsibility sink.
ipython | 2 days ago | parent
You can bound risk on AI agents just like on an ATM. You just can't rely on the AI itself to enforce those limits, of course; you need to place the limits outside the AI's reach. But this is already documented best practice.

The point about AI not having "skin" (I assume "skin in the game") is well taken. I often say that "if you've assigned an AI agent the 'A' in a RACI matrix, you're doing it wrong". A very important lesson that some company will learn publicly soon enough.
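As a rough sketch of what "limits outside the AI's reach" can mean in practice (my own illustration, not from any particular framework): the agent only proposes an action, and a separate guard with a human-set cap decides whether it executes. The names (SpendGuard, run_agent_action) and the cap value are hypothetical.

    # Minimal sketch: the spending cap lives outside the agent.
    # The agent's output is treated as an untrusted request, never a command.

    MAX_DAILY_SPEND = 500_00  # hard cap in cents, set by a human, not the agent

    class SpendGuard:
        def __init__(self, cap: int):
            self.cap = cap
            self.spent_today = 0

        def authorize(self, amount: int) -> bool:
            """Approve only if the running total stays under the cap."""
            if amount < 0 or self.spent_today + amount > self.cap:
                return False
            self.spent_today += amount
            return True

    def run_agent_action(guard: SpendGuard, proposed_amount: int) -> str:
        # Whatever the model said, the guard has the final word.
        if not guard.authorize(proposed_amount):
            return "refused: over limit, escalate to a human (the 'A' in RACI)"
        return f"executed transfer of {proposed_amount} cents"

    guard = SpendGuard(MAX_DAILY_SPEND)
    print(run_agent_action(guard, 20_000))  # executes
    print(run_agent_action(guard, 40_000))  # refused: would exceed the cap

The point of the design is that the agent never holds the "A": accountability for anything above the cap stays with a human who has to explicitly approve it.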
marcus_holmes | 2 days ago | parent
> Every project and task needs a responsibility sink.

I don't disagree, though I'd put it more as "machines cannot take responsibility for decisions, so machines must not have authority to make decisions". But we've all been in meetings where there are too many people in the room and only one person's opinion really counts. Replacing those other people with an LLM capable of acting on the decision would be a net positive for everyone involved.