▲ amluto | 4 hours ago
| Humans can do one thing that AI agents are 100% completely incapable of doing: being accountable for their actions. |

  ▲ jumpconc | 3 hours ago
  You haven't met certain humans. Not all humans have an internal capacity for accountability. The real meaning of accountability is that you can fire someone if you don't like how they work. Good news! You can fire an AI too.

    ▲ hun3 | 3 hours ago
    But it's still a bit more difficult to sue them for leaking your company's data. At least for now.

    ▲ pessimizer | an hour ago
    Bad news! They will not be aware that you have done this and will not care.

      ▲ Zak | an hour ago
      The purpose of firing a person shouldn't be vengeance but to remove someone who is unreliable or not cost-effective. It's similarly reasonable to drop a tool that's unreliable, though I don't think that's a fair description here. Instead, they used a tool that is generally known to be unpredictable and failed to sandbox it adequately.

        ▲ bigstrat2003 | an hour ago
        The purpose of firing a person is to remove someone unreliable, but also, having skin in the game makes a person behave more reliably. The latter is something you cannot do with an LLM. The cold, hard fact is: LLMs are an unreliable tool, and using them without checking their every action is extremely foolish.

  ▲ grey-area | 3 hours ago
  Don’t forget learning: humans can learn, but LLMs do not; they are trained before use.

    ▲ addedGone | an hour ago
    They learn on the next update :p

      ▲ quantummagic | an hour ago
      Yup. And eventually there will be online learning that doesn't require a formal update step. People keep mistaking the current implementation for an inherent limitation.

  ▲ unyttigfjelltol | 3 hours ago
  I disagree. They could fire Claude, and their legal counsel could pursue claims (if there were any, idk) -- the accountability model is similar. Anthropic probably promised no particular outcome, but then what employee does? And in the reverse case, if a person makes a series of impulsive, damaging decisions, they probably will not be able to accurately explain why they did it, because neither the brain nor physiology is tuned to permit it. Seems pretty much the same to me.

  ▲ antonvs | 3 hours ago
  That’s a feature that other humans impose on whoever’s being held accountable. There’s no reason in principle we couldn’t do the same with agents.

    ▲ LPisGood | 3 hours ago
    How would you fire an agent? Firing one impacts the company that makes the LLM, but not the agent itself.

  ▲ jeremyccrane | 3 hours ago
  Yep.