Ask HN: Are employers getting the returns from AI?
5 points by daemon_9009 11 hours ago | 6 comments

Hi, most companies have now given AI tools to their employees, including Claude Code, Cursor, and GitHub Copilot. This was supposed to be a testing period to see how things would turn out with AI. But it has been almost a year now, and employers will want to see the return on their investment in AI tools. Most devs have still not adopted coding agents, for one reason or another. Layoff numbers are normal at non-VC-funded companies, since they have no pressure to show AI working for them. But this has to converge somewhere: are the returns from AI tools enough to justify the headcount reductions, beyond just showing something to investors?

laxmena 11 hours ago | parent | next [-]

The more I use AI tools, the more I'm convinced that they're a force multiplier. I'm one of the strong advocates for adopting AI at work.

But I'm also very skeptical about the narrative that AI will simply replace workers.

The main issue is accountability. If an autonomous agent takes an incorrect action, who takes responsibility?

I recently had a first-hand experience at work where an agent, designed to act on customer tickets, was authorized to suspend accounts upon request.

It incorrectly suspended an active, critical account essential to our revenue metrics. Now, the support engineer who deployed that agent is writing the postmortem/CoE.

Incidents like these are why I believe AI will not "completely" replace human roles. When systems fail at scale, we still need an accountable human to analyze the failure and accept responsibility.

daemon_9009 10 hours ago | parent [-]

If we think about it, what is accountability? If a human made a mistake, as an employer you would do one of two things: either teach them what not to do, or fire them. The same can be done with AI agents: if you decide to keep the agent, teach it what not to do. This might not work 100% of the time, but for non-critical things it would work to a certain level. Agents should of course not be given consequential powers like deleting accounts at will. But the point is, once a mistake is made, it is made, whether by a human or an agent; you have to teach or fire something. I don't see any other way to solve this problem.

marcelbundle 11 hours ago | parent | prev [-]

>introduce AI to cut down on developers' salaries

>layoff developers

>AI credits are going up

Oh yeah it's all coming together

daemon_9009 10 hours ago | parent | next [-]

> layoff developers

For now we're not sure what's going to happen. Big tech is surely reducing headcount to invest more in AI. But with the recent GitHub/AWS downtime, they blamed humans rather than AI, which says a lot: they will go to any extent to prove AI is bringing enough value to justify firing human engineers.

marcelbundle 11 hours ago | parent | prev [-]

At this point, you can't convince me that this isn't, in fact, some shadow HR big-brain play just to keep HR jobs going with a constant layoff/rehire cycle.

fragmede 9 hours ago | parent [-]

Aren't HR jobs going to get automated by AI as well? If AI is a force multiplier, the HR department only needs to be half the size it was to support the same number of people.