nuancebydefault 11 hours ago

The article discusses essentially two new problems with using agentic AI:

- When one of the agents does something wrong, a human operator needs to intervene quickly and give the agent expert instructions. However, since experts no longer execute the underlying tasks themselves, they quickly forget parts of their expertise. This means the experts need constant retraining, leaving them little time to oversee the agents' work.

- Experts must become managers of agentic systems, a role they are not familiar with, so they don't feel at home in their jobs. This problem is harder for the experts' own managers to recognize, since they rarely experience it first hand.

Indeed, the irony is that AI provides efficiency gains which, as they become more widely adopted, become more problematic because they undermine the necessary human in the loop.

I think this all means that automation is not taking away everyone's jobs: it makes things more complicated, and hence humans can still compete.

grvdrm 9 hours ago | parent | next [-]

Your first problem doesn't feel new at all. It reminded me of a situation several years ago. What was previously an Excel report was automated into PowerBI. Great, right? Time saved. Etc.

But the report was very wrong for months, maybe longer. And since it was automated, the instinct to check and validate was gone. Tracking down the problem required extra work that hadn't been part of the Excel flow.

I use this example in all of my automation conversations to remind people to be thoughtful about where and when they automate.

all2 5 hours ago | parent [-]

Thoughtfulness is sometimes increased by touch time. I've seen various examples of this over the years: teachers who must collate and calculate grades manually show improved outcomes for their students, test techs who handle hardware become acutely aware of its many failure modes, and so on.

asielen 10 hours ago | parent | prev | next [-]

The way you put that makes me think of the challenge younger generations are currently having with technology in general: kids who were raised on touch-screen interfaces vs. kids in older generations who were raised on computers that required more technical skill to figure out.

In the same way, when everything just works, there will be no difference; but when something goes wrong, the person who learned the skills beforehand will have a distinct advantage.

The question is whether AI gets good enough that occasionally slowing down to find a specialist is tenable. It doesn't need to be perfect, it just needs to be predictably not perfect.

Experts will always be needed, but they may become more like car mechanics: there to fix hopefully rare issues and provide a tune-up, rather than building the cars themselves.

jeffreygoesto 10 hours ago | parent [-]

Car mechanics face the same problem today with rare issues. They know the standard mechanical procedures, but they often cannot track down a problem; all they can do is try to flash over an ECU or try swapping it. They also don't admit they are wrong, at least most of the time...

c0balt 4 hours ago | parent [-]

> only try to flash over an ECU or try swapping it.

To be fair, they have wrenches thrown in their way there, as many ECUs and other computer-driven components are fairly locked down and undocumented. In particular, the programming software itself is often not freely distributed (it's available only to approved shops/dealers).

delaminator 10 hours ago | parent | prev | next [-]

I used to be a maintenance data analyst in a welding plant producing about 1 million welded units per month.

I was the only person in the factory who was a qualified welder.

layer8 4 hours ago | parent | prev | next [-]

They also made the point that the less frequent failures become, the more tedious it is for the human operator to check for them. Their example: AI agents producing verbose plans of what they intend to do that are mostly fine, but occasionally contain critical failures the operator is supposed to catch.

DiscourseFan 11 hours ago | parent | prev | next [-]

That's how it tends to go: automation removes some parts of the work but creates more complexity. Sooner or later that complexity will also be automated away, and so on and so forth. AGI evangelists ought to read Marx's Capital.

jennyholzer2 9 hours ago | parent [-]

I seriously doubt that there is even one "AGI evangelist" who has the intellectual capacity to read books written for adult audiences.

bitwize 5 hours ago | parent | next [-]

Marxists tend to think that the Venn diagram of "people who have read and understood Marx" and "Marxists" is a circle. There are plenty of AGI evangelists who are smart enough to read Marx, and many of them probably have. The problem is that, being technolibertarians and all, they consider Marx the enemy.

DiscourseFan 4 hours ago | parent [-]

That seems patently absurd, considering that the debate is not between Marxists and non-Marxists but between accelerationists and orthodox Marxists, both of whom are readers of Marx; it's just that the former are aligned with technolibertarianism.

ctoth 8 hours ago | parent | prev [-]

Hi. I am not an evangelist -- I'm quite certain it's going to kill us all! But I'd like to think I'm about the closest thing to an AI booster you'll find here, given how much damn utility I get out of it. I'm interested in reading; I probably read too much! Would you like to suggest a book we can discuss next week? I'd be happy to do that with you.
