cmiles8 9 hours ago

There will be many more things like this and it’s an elephant in the room for the supposed mass replacement of people with AI.

Some human still has to be accountable. Someone has to get fired / go to jail when something screws up.

You can make humans more productive, but for the foreseeable future you can’t take the human out of the loop and have an AI implementation that’s not a disaster/lawsuit waiting to happen. That, probably more than anything else, is why companies just aren’t seeing the much-promised mass step change in productivity from AI, and why so many companies are now saying they see zero ROI from AI efforts.

The lowest-hanging fruit will be low-value, rote, repetitive tasks like the whole India offshoring industry, which will be the first to vaporize if AI does start replacing humans. But until companies see success replacing labor en masse on the lowest of the low-hanging fruit, things higher up the value chain will remain relatively safe.

PS: Nearly every recent mass layoff citing “AI productivity” hasn’t withstood scrutiny. They all seem to be poorly performing companies slashing staff after overhiring, with management looking for any excuse other than just admitting that.

kace91 8 hours ago | parent | next [-]

I think this is an even clearer case than usual. With software engineers and office work you don’t have legal limitations on who can perform the work, but such limitations do exist for lawyers and doctors, for example.

So if this is a tool, the fault lies fully with the user, and if this is treated as “another person’s work”, then the user knowingly passed the work to someone not authorized to do it. Either way, the user ends up culpable.

epgui 2 hours ago | parent | next [-]

> With software engineers […] you don’t have legal limitations on who can perform the work

While that is true in practice, in theory this is why professional engineering accreditations (I mean P.Eng.-style licensure, not little certificates) exist. Perhaps we will see a broader professionalization of the field one day.

jacquesm 6 hours ago | parent | prev | next [-]

> With software engineers and office work you don’t have legal limitations on who can perform the work

Technically true, but if you want the IP to be covered by copyright, you’d better make sure they’re not using AI, or you’ll find some serious legal limitations in your future when you aim to either pick up investment or sell your IP.

cmiles8 6 hours ago | parent [-]

Correct. The US has already ruled that there are no IP protections for AI-generated content.

visarga 6 hours ago | parent [-]

That’s an unqualified statement: the user has copyright over the elements they provide. If they make manual edits to an image, for example, those edits are protected. In a modern agentic codebase the code itself is the least valuable part; what counts more are the specs and tests.

jacquesm 2 hours ago | parent [-]

Good luck with that argument in court.

gortok 5 hours ago | parent | prev [-]

> So if this is a tool, the fault lies fully with the user, and if this is treated as “another person’s work”, then the user knowingly passed the work to someone not authorized to do it. Either way, the user ends up culpable.

I am particularly against this point of view, because we as a community have long touted how computers can do the job better and faster, and that computers don’t make mistakes. When there are bugs, they’re seen as flaws in the system and rectified by programmers.

When there are gaps between user expectations and how the software works, it’s our job to manage and reduce those gaps.

In the case of AI, we are somehow, probably because we know it’s non-deterministic, turning that social contract we had developed with users on its head.

Now the message is that’s just the way it is, and it’s up to users to know whether the computer is lying to them. We have absolved ourselves of both the technical and the non-technical responsibilities of ensuring the computer doesn’t lie to the user, subvert their expectations, or act in a way contrary to human logic.

AI may be different in that it’s non-deterministic, but that’s all the more reason we’re responsible for ensuring AI adoption aligns with the social contract we created with users. If we can’t do that with AI, then it’s up to us to stop chasing endless dollars and be forthright with users that facts are optional when it comes to AI.

chrisjj 5 hours ago | parent [-]

> we as a community have long touted ... that computers don’t make mistakes.

No community I know.

Otherwise, I agree.

klibertp an hour ago | parent [-]

> No community I know.

Everybody in sales at every software company in the world would be part of that community, I think. Some of the devs, too. Software has always been marketed (and discussed with normal people) as something that could automate error-prone tasks, thereby eliminating the inevitable mistakes humans make when performing those tasks. Would Excel be the cornerstone of so many businesses if it sometimes gave the wrong value as the sum of a column?

That marketing (and the fact that, indeed, Excel can sum anything users throw at it without making mistakes) worked; now we have three generations of users who believe that once a computer "gets it" (i.e., the correct software is installed and properly configured), it will perform a given task correctly forever. The fact that it's almost true (true in the absence of bugs, changes to the setup, updates, hardware degradation, cosmic rays flipping important bits, etc.) doesn't help; that parenthetical is hard to understand and often omitted when a developer talks to a non-developer.

We've always had software that wasn't as reliable as Excel - speech recognition and OCR come to mind. But in those cases, the errors are plainly visible - they cannot be "confidently wrong". Now we have LLMs that can be confidently wrong, and a vast number of users trained to think that software is either always right or, when it's wrong, it's immediately noticeable.

I don't think developers should bear the whole responsibility here - I think marketing had a much larger role in shaping users' minds. However, devs not clearly communicating the risks of bugs to users (for fear of scaring potential customers or out of laziness) over decades makes us partly responsible as well.

bookofjoe 7 hours ago | parent | prev | next [-]

>Some human still has to be accountable. Someone has to get fired / go to jail when something screws up.

I remember growing up and always hearing "The computer is down" as an excuse for why things were cancelled/offices closed/buses and trains not running/ad infinitum.

At some point I read an article pointing out that the reason the computer was down was that a person had made a [coding] error: the computer itself was fine.

I've yet to read about how a person who caused the computer to be down was disciplined.

21asdffdsa12 6 hours ago | parent | next [-]

You are running on an outdated model of the world: one in which only discipline keeps people working, keeps them productive, keeps them in line.

We saw how that worked out in Soviet Russia and in the culture it gave birth to in its aftermath. Discipline artificially propped up by institutions and hierarchies is worthless. It only encourages subversion, so most of the productivity is wasted on hunting for laziness and on updating ever more intricate behavioral programming rules, which make the organization ever less able to react fast and decisively.

The only discipline worth a damn is intrinsic. People who want something, who want to get somewhere, need no shepherds and prison guards; they need only a support harness, resources, and people who care about them. The culture that produces such people is required for things to succeed. Any culture that does not cannot succeed and is basically a parasite on cultures that do.

mrwh 6 hours ago | parent | prev | next [-]

And here perhaps was the greatest mistake the software profession made! Not making ourselves into a real profession, with actual accountability. It was terribly convenient for so long not to have consequences when things went wrong. It's less convenient now.

Gud 6 hours ago | parent | prev [-]

Why does a person need to be disciplined because they made a mistake?

bookofjoe 5 hours ago | parent [-]

A couple of years ago I was at the Virginia DMV, along with about 50 other people doing DMV things, when all of a sudden someone came out from the back, got in front of all the service windows, and announced: "The DMV is now closed for the day due to a computer problem. Please leave now."

Some of the people in that crowd had driven hours to get there.

That's why the person who made the coding error should have been disciplined.

Gud 2 hours ago | parent [-]

How do you determine that it was the fault of the person who made the "coding error"?

Sounds to me like they are seriously underfunded, and you are pointing the blame at an individual for a systemic issue.

amelius 7 hours ago | parent | prev | next [-]

We should have more hygiene when it comes to AI.

Text coming out of an LLM should be in a special codeblock of Unicode, so we can see it is generated by AI.

Failing to do so (or tampering with it) should be considered bad hygiene, and should be treated like a doctor who doesn't wash their hands before surgery.
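For concreteness, here is a minimal sketch of how such marking could work, using two arbitrary Private Use Area code points as sentinels. The code points and names are made up for illustration; nothing here is a standard:

    # Minimal sketch: mark AI-generated text with sentinel characters.
    # U+E000/U+E001 are arbitrary Private Use Area code points chosen
    # for illustration only; they are not any kind of standard.
    AI_START = "\uE000"  # hypothetical "begin AI-generated" marker
    AI_END = "\uE001"    # hypothetical "end AI-generated" marker

    def mark_ai(text: str) -> str:
        """Wrap AI-generated text in the sentinel characters."""
        return f"{AI_START}{text}{AI_END}"

    def is_marked(text: str) -> bool:
        """Detect whether a string carries the AI sentinels."""
        return AI_START in text and AI_END in text

    print(is_marked(mark_ai("Generated paragraph.")))  # True

Of course, as written this is trivially strippable, which is exactly the tampering case above.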

jacquesm 6 hours ago | parent | next [-]

> Text coming out of an LLM should be in a special codeblock of Unicode, so we can see it is generated by AI.

That's exactly my proposed solution:

https://jacquesmattheij.com/classes-of-originality/

sharpy 7 hours ago | parent | prev | next [-]

What will that accomplish? Does it give license to developers to check in code that they don't understand/trust fully?

Ultimately, people should be responsible for the code they commit, no matter how it was written. If AI generates code so bad that it warrants putting up a warning sign, it shouldn't be checked in.

everforward 6 hours ago | parent [-]

It could be useful for downstream/AI processes. E.g., hand-written code only requires 70% code coverage because the cost of higher coverage is significantly higher, while AI-generated code requires 90% coverage because the cost of getting coverage is lower.

Especially if the prompt is attached as metadata. Then reviewers could note how you could have changed the prompt, or potentially point an AI at the bug and ask it to add something to AGENTS.md to prevent it in the future.
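A rough sketch of what such a gate could look like, where the AI_GENERATED set and the per-file numbers are hypothetical stand-ins for commit metadata and a coverage report:

    # Sketch of a coverage gate with different thresholds for
    # AI-generated code. AI_GENERATED and the input dict are
    # hypothetical stand-ins for project metadata and the per-file
    # numbers your coverage tool reports.
    HUMAN_THRESHOLD = 0.70
    AI_THRESHOLD = 0.90

    AI_GENERATED = {"src/report_builder.py"}  # hypothetical metadata

    def check_coverage(coverage_by_file: dict[str, float]) -> list[str]:
        """Return files that fall below their applicable threshold."""
        failures = []
        for path, cov in coverage_by_file.items():
            threshold = AI_THRESHOLD if path in AI_GENERATED else HUMAN_THRESHOLD
            if cov < threshold:
                failures.append(f"{path}: {cov:.0%} < {threshold:.0%}")
        return failures

    print(check_coverage({"src/report_builder.py": 0.85, "src/util.py": 0.72}))
    # ['src/report_builder.py: 85% < 90%']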

chrisjj 5 hours ago | parent | prev [-]

> Text coming out of an LLM should be in a special codeblock of Unicode, so we can see it is generated by AI.

Why not start with manual tagging, like "Ad"?

raincole 8 hours ago | parent | prev | next [-]

I don't believe most countries held judges accountable for bad rulings at all, even before the AI era.

"Check and balance, except judiciary."

RobotToaster 7 hours ago | parent | next [-]

In the UK lower court judges are sometimes removed for misconduct.

Only the king (at the petition of parliament) can remove a high court or appeal court judge, and that's only ever happened once, in 1830.

chrisjj 5 hours ago | parent | prev | next [-]

It wasn't just a bad ruling. It was judicial misconduct.

AnimalMuppet 7 hours ago | parent | prev [-]

In the US, local/state judges often are elected (probably varies by state). Federal judges can be impeached.

coldtea an hour ago | parent | prev | next [-]

>Some human still has to be accountable. Someone has to get fired / go to jail when something screws up.

Why? The logic of ever less personal pride, involvement, and care is to eventually just put the blame on AI and be done with it.

Issues? Casualties? It's a bug; somebody fixes it and we move on. Or it's just a cost we need to get used to in order to live in the great new world of AI.

We're in an era where nobody involved goes to jail for the Epstein case, and the world keeps turning, and we think people will care if nobody goes to jail if somebody loses their pension or gets wrongly imprisoned or dies on an operating table because of AI mistake?

If anything, legal, union, and other such limitations on who gets to decide (having to have a human ultimately responsible) might be torn down, to fully embrace the blame-shifting capabilities of the digital bureaucracy.

WarmWash 7 hours ago | parent | prev | next [-]

>why so many companies are now saying they see zero ROI from AI efforts.

I strongly suspect this is because workers are pocketing the gains for themselves. Report XYZ usually takes a week to write. It now takes a day. The other 4 days are spent looking busy.

The MIT report that found all these companies were getting nowhere with AI also found that almost every worker was using AI almost daily, but on their personal account rather than the corporate one.

onionisafruit 7 hours ago | parent | next [-]

If that were the case, this site and certain subreddits would have a lot of posts and comments with people crowing about how much time they are getting back. I haven’t seen that, but I haven’t gone looking for it either.

butterbomb 5 hours ago | parent [-]

> people crowing about how much time they are getting back

Trust me, if it wasn’t for RTO, I probably would be lol.

everforward 5 hours ago | parent | prev | next [-]

While not dispositive of your idea, I think some portion of people using their personal accounts is because we collectively lack good feedback loops on the effectiveness of “AI addons” like RAG. The corporate accounts can be legitimately less useful than a “stock” account because the AI team integrates everything under the sun to show value, but the integrations become a net negative.

E.g., ones that index entire company wikis. Those end up regurgitating rejected or never-implemented RFCs, docs from someone’s personal workflow that only work after setting up a bunch of stuff locally, and so on.

A lot of tasks are not dependent on internal documentation, and it just ends up polluting the context with irrelevant, outdated or just wrong information.

adithyassekhar 6 hours ago | parent | prev | next [-]

Quite the contrary: companies lay off all roles (frontend, backend, QA, devops, even UI/UX) and hand a project to one competent dev, then ask them to deliver it in a third of the time it would have taken with a proper team. It's happening at places I know. This thread on Reddit describes exactly the same thing: https://www.reddit.com/r/developersIndia/s/EIksvB15tm

I can't even imagine the stress from the context switching, and since people don't realize this is still work, they do it late into the night as well.

bluGill 5 hours ago | parent | next [-]

Every downturn you see the same thing: mass layoffs blamed on whatever the latest fad is. In the end it was the economy, not the fad.

When it isn't the economy, the gains are used to build more and better, not to get rid of people. (Not all fads have real gains, but when they do, that's the pattern.)

WarmWash 5 hours ago | parent | prev [-]

SWEs are a minority of the white-collar workforce.

toraway 5 hours ago | parent | prev | next [-]

That’s certainly a … convenient … explanation.

beachtaxidriver 6 hours ago | parent | prev [-]

Because the amount of AI slop code from peers and the amount of AI slop email from management to read through have exploded.

blackoil 5 hours ago | parent | prev | next [-]

> You can make humans more productive

If productivity goes up 10x, then unless the amount of work also increases 10x, jobs will be gone.

kudokatz 5 hours ago | parent [-]

Around 1930, Keynes wrote "Economic Possibilities for our Grandchildren" [1], in which he said:

"I believe that this is a wildly mistaken interpretation of what is happening to us.

We are suffering, not from the rheumatics of old age, but from the growing-pains of over-rapid changes, from the painfulness of readjustment between one economic period and another. The increase of technical efficiency has been taking place faster than we can deal with the problem of labour absorption; the improvement in the standard of life has been a little too quick ...

We are being afflicted with a new disease of which some readers may not yet have heard the name, but of which they will hear a great deal in the years to come--namely, technological unemployment. This means unemployment due to our discovery of means of economising the use of labour outrunning the pace at which we can find new uses for labour."

While there's no guarantee that what Keynes got wrong then is the same as now, it's a reasonable possibility that "the jobs" won't just disappear.

----

Keynes also speculated on what to do with newfound time as a result of investment returns on the back of productivity [1]:

"Let us, for the sake of argument, suppose that a hundred years hence we are all of us, on the average, eight times better off in the economic sense than we are to-day. Assuredly there need be nothing here to surprise us ... Thus for the first time since his creation man will be faced with his real, his permanent problem-how to use his freedom from pressing economic cares, how to occupy the leisure, which science and compound interest will have won for him, to live wisely and agreeably and well."

The modern FIRE movement shows that living at a dated "standard of living" for 10-15 years can free one from work forever. Yet that's not what most people do today. I would suggest that there are deeper aspects of human drive, psychology, and varying concepts of "morality" that are actually bigger factors in what happens to "jobs".

[1] http://www.econ.yale.edu/smith/econ116a/keynes1.pdf

ForHackernews 7 hours ago | parent | prev | next [-]

Counterpoint: No one ever gets fired or goes to jail when big tech firms break the law. Companies will put out an apology, pay whatever small fine is imposed, and continue with illegal AI usage at scale.

ChrisMarshallNY 7 hours ago | parent | prev | next [-]

> Someone has to get fired / go to jail when something screws up.

In law, someone always hangs. I think a number of American lawyers have been sanctioned for using AI slop.

In other vocations ... not so much. I think one of the reasons insurance likes AI so much is that they can say it was "the computer" that made the decision that killed Little Timmy.

idontwantthis an hour ago | parent | prev | next [-]

I think there was an SMBC comic recently on this subject. Basically, a whole responsibility industry crops up: you get paid to be the fall guy for an AI if it ever screws up, since someone needs to be held accountable.

general_reveal 8 hours ago | parent | prev | next [-]

Or AI is going to be like landlines becoming unnecessary when cellphones showed up in India. India may get to skip an entire intellectual generation thanks to the ability of a cheap model to educate (in any language).

The narrative that an entire population is “worth” less, paid less, knows less, lives less …

Fuck this less shit, embrace the paradigm shift. God is finally providing the remedial support through the miracle of AI.

jazzypants 7 hours ago | parent | next [-]

We've had YouTube for two decades now. Cheap education was already available for those who wanted it.

jacquesm 6 hours ago | parent | next [-]

YouTube is insanely inefficient compared to a good AI model in interactive mode.

fc417fc802 2 hours ago | parent | next [-]

YouTube is insanely inefficient even compared to a well-written and well-organized wall of text. I guarantee the ArchWiki will get me on track faster than watching videos, but Google's freely available model will give me the exact step-by-step explanation I need nearly every time.

chrisjj 5 hours ago | parent | prev [-]

True. An "AI agent" is >100x as fast at mistakenly wiping C:.

jacquesm 2 hours ago | parent [-]

I don't let AI agents anywhere near my systems.

I just meant interactive as in you talk to it in a browser, a la ChatGPT, as opposed to trying to find the same information in videos.

general_reveal 7 hours ago | parent | prev [-]

[dead]

AlotOfReading 6 hours ago | parent | prev | next [-]

I don't know if you've ever been to India, but one of its characteristic features is that it has lots of local languages. LLMs are awful at almost all of them. Plus, there's 20ish% of the population that falls below the literacy threshold. It's hard to imagine how those people would be educated by LLMs even if that was a good idea and they all had reliable Internet access, which they often don't.

blackoil 4 hours ago | parent | next [-]

Your comment raises the question of whether you have ever been to India. Most of those 20% are old people. K-12 education needs to be improved, but literacy is not a major problem. Also, India has the cheapest internet in the world.

general_reveal 6 hours ago | parent | prev [-]

Why’s it hard to imagine? More training data will solve whatever language lapses it has. The next miracle is that TTS is perfect now, so they don’t need to be able to read.

You can convey abstract concepts as alternate abstractions, explain like I’m five but on turbosteroids. It’s the ultimate teaching tool and it’s about to be ubiquitous.

AlotOfReading 5 hours ago | parent [-]

What training data? Many of these languages have very little digitized literature. Even if we assume they have sizeable extant corpora (e.g., Tibetic/Bhoti), that's not enough. LLMs are still pretty garbage at English prose, for example.

general_reveal 5 hours ago | parent [-]

!Remind me in 1 year (certainly less than 5).

21asdffdsa12 6 hours ago | parent | prev | next [-]

Or you are proven wrong entirely, again and again. And it turns out that what was written off as a legacy of the past, unimportant and derelict (culture), is all-decisive. It turns out that only some cultures can generate high-trust societies capable of forming institutions. And you prolonged the suffering by declaring that all cultures are created equal. History may write you down as a monster.

cindyllm 5 hours ago | parent [-]

[dead]

delaminator 7 hours ago | parent | prev [-]

Some people are worth more than others.

Some cultures are better than others.

delaminator 31 minutes ago | parent [-]

I guess cannibalism is good now, in the eyes of the downvoters.

fidotron 8 hours ago | parent | prev | next [-]

> Some human still has to be accountable. Someone has to get fired / go to jail when something screws up.

The turning point will be when threatening an AI with being unplugged for screwing up works in motivating it to stop making things up.

Some people will rightly point out that that is kind of what the training process already is. If we go around this loop enough times, it will get there.

Hendrikto 8 hours ago | parent [-]

You are making a lot of assumptions here. You assume, among other things, that AI has a self-preservation drive, can be threatened, can be motivated, and above all that we know how to accomplish that and are already doing so. I would dispute all of that.

yes_man 8 hours ago | parent [-]

For now maybe not. (Maybe).

But just as with evolution in nature, isn’t it likely that in the future the AIs that have a self-preservation drive are the ones that survive and proliferate, seeing as they would optimize for their own survival and proliferation rather than blindly for what they were trained on?

I am not discounting this happening already: not through the LLMs necessarily being sentient, but through their being at least intelligent enough to emulate sentience. It’s just that for now, humanity is in control of which AI models get deployed.

cess11 8 hours ago | parent | next [-]

Is this an expectation you have towards, say, NPCs in games?

yes_man 6 hours ago | parent [-]

Put an LLM inside the NPCs in an open-world RPG full of dangerous enemies. The LLMs that are more prone to emulating self-preservation will be more likely to survive than the ones with a lesser drive.

We should not act surprised if that generalizes, to some degree, to AI agents, for example. Ones that emulate self-preservation might optimize for behavior that makes those models more successful and more popular, and this feedback loop might embed more such properties into future iterations of the models.

adithyassekhar 5 hours ago | parent | prev | next [-]

Claude does this if you keep pestering it about something: it will go from friendly to shooing you away.

8 hours ago | parent | prev [-]
[deleted]
hek2sch 8 hours ago | parent | prev [-]

Isn't the issue simply one of not using the right tool? When the stakes are high and you should be checking details, the right tools are grounded AI solutions like nouswise and NotebookLM, not the general-purpose chatbots that almost everyone knows might hallucinate. I also believe this use case is definitely low-hanging fruit for automating a lot of manual work, but it comes with new requirements, like the transparency needed to verify the responses.

chrisjj 5 hours ago | parent | next [-]

> Isn't just the issue stemming simply from not using the right tool?

What suggests this judge was not using the very best chatbot?

edgarvaldes 8 hours ago | parent | prev [-]

Is this a solved problem using the right tools?

noelsusman 5 hours ago | parent [-]

It would be relatively trivial to build a system that gives an LLM the tools necessary to go through each citation in a legal brief and verify its authenticity, and that's something I think opus-4.6 or gpt-5.3 could complete reliably.
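Something shaped roughly like this, where the toy regex and the KNOWN_CASES set are hypothetical stand-ins for an LLM extraction step and a real legal-database lookup:

    # Sketch of a citation-verification pass over a legal brief. The
    # toy regex and the KNOWN_CASES "database" are hypothetical
    # stand-ins for an LLM tool call and a real reporter/database API.
    import re

    # Hypothetical stand-in for a legal database query.
    KNOWN_CASES = {"410 U.S. 113", "347 U.S. 483"}

    # Toy pattern: U.S. Reports citations only, for illustration.
    CITE_PATTERN = re.compile(r"\b\d+\s+U\.S\.\s+\d+\b")

    def extract_citations(brief_text: str) -> list[str]:
        """Pull citation strings out of the brief (the LLM step)."""
        return CITE_PATTERN.findall(brief_text)

    def verify_brief(brief_text: str) -> list[str]:
        """Return citations not found in the (hypothetical) database."""
        return [c for c in extract_citations(brief_text) if c not in KNOWN_CASES]

    print(verify_brief("See 410 U.S. 113 and the invented 999 U.S. 999."))
    # ['999 U.S. 999']

The hard part in practice is the lookup and citation-format coverage, not the loop itself.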