laxmena 12 hours ago

The more I use AI tools, the stronger my conviction that they're a force multiplier. I'm one of the strong advocates for adopting AI at work.

But I'm also very skeptical of the narrative that AI will simply replace workers.

The main issue is accountability. If an autonomous agent takes an incorrect action, who takes responsibility?

I recently had a firsthand experience at work where an agent, designed to act on customer tickets, was authorized to suspend accounts upon request.

It incorrectly suspended an active, critical account essential to our revenue metrics. Now, the support engineer who deployed that agent is writing the postmortem/CoE.

Incidents like these are why I believe AI will not "completely" replace human roles. When systems fail at scale, we still need an accountable human to analyze the failure and accept responsibility.

daemon_9009 11 hours ago | parent [-]

If we think about what accountability really is: when a human makes a mistake, as an employer you do one of two things: either teach them what not to do, or fire them. The same can be done with AI agents. If you decide to keep the agent, teach it what not to do; this won't work 100% of the time, but for non-critical things it works to a certain level. Agents should of course not be given consequential actions like deleting accounts at will. But the point is, once a mistake is made, it is done, be it human or agent: you have to teach or fire something. I don't see any other way to solve this problem.
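The "don't give agents consequential actions" idea from the comment above can be sketched as a simple approval gate. This is a hypothetical illustration, not any specific product's API: the action names, the `HIGH_RISK` set, and the `ActionGate` class are all made up for the example. The idea is that high-risk actions (like the account suspension in the parent's story) get queued for human review instead of executing autonomously, which also keeps a human accountable for the final decision.

```python
from dataclasses import dataclass, field

# Hypothetical set of actions considered too consequential for
# autonomous execution (assumption for this sketch).
HIGH_RISK = {"suspend_account", "delete_account"}

@dataclass
class ActionGate:
    """Routes agent actions: low-risk ones execute, high-risk ones wait for a human."""
    pending: list = field(default_factory=list)  # actions awaiting human review
    log: list = field(default_factory=list)      # executed actions, useful for postmortems

    def submit(self, action: str, target: str) -> str:
        if action in HIGH_RISK:
            # Queue instead of executing; a human must approve.
            self.pending.append((action, target))
            return "queued_for_human_review"
        self.log.append((action, target))
        return "executed"

gate = ActionGate()
print(gate.submit("reply_to_ticket", "acct-42"))    # executed
print(gate.submit("suspend_account", "acct-42"))    # queued_for_human_review
```

In the parent's incident, a gate like this would have turned "agent suspends a revenue-critical account" into "agent asks a support engineer to confirm the suspension," which is exactly where the accountable human sits.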