rich_sasha 3 days ago

Is that so ironic? Think of humans in factories fishing out faulty items, where formerly they would perhaps be the artisans that made the product in the first place.

latexr 3 days ago | parent | next [-]

The difference is that in the factory case the faulty items are outliers and easy to spot. You throw one away and let the machine carry on making another copy. You barely lose any time and in the end are still faster than artisans, who are never in the loop.

In the AI case, you’re not making the same thing over and over, so it’s more difficult to spot problems, and when they happen you have to manually find and fix them, likely throwing everything away and starting from scratch. So in the end all the time and effort put into the machine was wasted and you would have been better off going with the artisan (whom you still need) in the first place.

mc32 3 days ago | parent | next [-]

A factory produces physical products and “AI” produces intellectual products. One is a little fuzzier than the other.

ffsm8 3 days ago | parent | prev [-]

I don't think you've ever talked with someone in manufacturing who is in any way aware of how quality assurance works there...

I can understand how you might have that misunderstanding, but just think about it a little: what kinds of minor changes can result in catastrophic failures?

Producing physical objects to spec, and doing quality assurance for that spec, is way harder than you think.

Some errors are easy to spot, for sure, but that's literally the same for AI-generated slop.

GoatInGrey 3 days ago | parent | next [-]

I spent five years working in quality assurance in the manufacturing industry, both on the plant floor and in labs, and the other user is largely correct in the spirit of their message. You're right that it's not just a matter of things being easy to spot, but that's why there are multiple layers of QA in manufacturing. It's far more intensive than even traditional software QA.

You perform manual validation of outputs multiple times before manufacturing runs, and perform manual checks every 0.5-2 hours throughout the run. QA then performs its own checks every two hours, including validation that line operators have been performing their checks as required. This is in addition to line staff, who keep their eyes on the product to catch obvious issues as they process it.

Any defect that is found marks all product palleted since the last successful check as suspect. Suspect product is then subjected to distributed sampling to gauge the potential scope of the defect. If the defect appears to be present in that palleted product AND distributed throughout, it all gets marked for rework.

This is all done when making a single SKU.
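The "suspect since the last successful check" rule above can be sketched as code. This is a toy model for illustration only; the function name, pallet indexing, and numbers are all hypothetical, not anything from an actual QA system:

```python
# Toy model of the QA rule described above: when a check fails,
# every pallet produced since the last successful check is marked
# suspect. All names and numbers here are illustrative.

def mark_suspect(pallets, check_results):
    """pallets: pallet ids in production order.
    check_results: ordered (pallet_index, passed) tuples, meaning a
    check was performed just after the pallet at that index."""
    suspect = set()
    last_good = -1  # index of the pallet at the last successful check
    for idx, passed in check_results:
        if passed:
            last_good = idx
        else:
            # Defect found: everything palleted since the last good
            # check becomes suspect and goes to distributed sampling.
            suspect.update(pallets[last_good + 1 : idx + 1])
    return suspect

# Checks after pallet 2 (pass) and pallet 5 (fail):
# pallets 3-5 are marked suspect.
print(sorted(mark_suspect(list(range(8)), [(2, True), (5, False)])))
# -> [3, 4, 5]
```

The point of the sketch is how fast the suspect set grows with the check interval: the longer you go between successful checks, the more product a single failure throws into question.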

In the case of AI, let's say AI programming, not only are we not performing this level of oversight and validation on the output, but the output isn't even the same SKU! It's a new one-of-a-kind SKU every time, without the pre- and post-run quality checks common in manufacturing.

AI proponents follow a methodology of either not checking at all (i.e. spec-driven development) or only sampling every tenth, twentieth, or hundredth SKU rolling off the analogous assembly line.

dimitri-vs 3 days ago | parent [-]

In the case of AI, it gets even worse when you factor in MCPs - which, to continue your analogy, is like letting random people walk into the factory and adjust the machine parameters at will.

But people won't care until a major correction happens. My guess is that we'll see a string of AI-enabled script kiddies piecing together massive hacks that leak embarrassing or incriminating information (think Celebgate-scale incidents). The attack surface is just so massive - there's never been a better time to be a hacker.

n4r9 3 days ago | parent | prev | next [-]

Yeah, a relative has worked in this area. It's eye-opening just how challenging it can be to test "does this component conform to its spec".

latexr 3 days ago | parent | prev [-]

It depends entirely on what you’re building. The OP mentioned “humans fishing out faulty items” that would otherwise be built by artisans, so clearly we’re not talking about complex items requiring extensive tests, but stuff you can quickly find and sort visually.

Either way, the point would stand. You wouldn’t hit that factory issue and then say “alright boys, dismantle everything, we need to get an artisan to rebuild every single item by hand”.

LightBug1 3 days ago | parent | prev | next [-]

Yes, when AI's whole schtick was that it was supposed to be the greatest and smartest revolution in the last few centuries.

Conclusion: we are not in the age of AI.

rich_sasha 3 days ago | parent | next [-]

Dunno. Mass production was clearly a many-orders-of-magnitude improvement on the artisan model, yet humans are still needed.

We still call it the "industrial revolution".

LightBug1 3 days ago | parent [-]

Fair.

My jury is still out as to whether the current models are proto-AI. Obviously an incredible innovation. I'm just not certain they have the potential to go the whole way.

/layman disclaimer

falcor84 3 days ago | parent [-]

As you say, whether we call it "AI" or "doohickey", it is an incredible innovation. And I don't think anyone is claiming at the moment that the systems as-is will themselves "go the whole way". It is a technological advancement that, like all others, should inspire practitioners to develop better future systems that adapt some of its aspects.

Perhaps at some point we will see a self-propelling technological singularity with the AI developing its own successor autonomously, but that's clearly not the current situation.

LightBug1 2 days ago | parent | next [-]

Doohickey is so much more relatable ... I may call LLMs that from now on. Thank you.

bluefirebrand 2 days ago | parent | prev | next [-]

> And I don't think that anyone is claiming at the moment that the systems as-is will themselves "go the whole way"

Dunno but I see plenty of people making exactly this claim every day, even on this site

kmoser 2 days ago | parent | prev [-]

That will never happen. We may approach that state asymptotically, but since AI output is stochastic and humans' goals change over time, humans will always be part of the loop.

falcor84 2 days ago | parent [-]

Whatever the formula for the probability of recursive self-improvement of AI may be, I am unfortunately certain that the fickleness of human goals does not factor into it.

CuriouslyC 3 days ago | parent | prev [-]

I'm a booster, but LLMs are 100% not going to give us true autonomous intelligence; they're incredibly powerful, but all the intelligence they display is "hacked," and generalization is limited. That being said, people are making a huge mistake with the idea that just because we're not going to hit AGI in the next few years, these tools aren't powerful enough to irreversibly transform the world. They absolutely are, and there's no going back.

hvb2 3 days ago | parent [-]

> That being said, people are making a huge mistake with the idea that just because we're not gonna hit AGI in the next few years

Because that's what we've been promised, not once but many times by many different companies.

So sure, there's a marginal improvement, like refactoring tools that do a lot of otherwise manual labor.

1vuio0pswjnm7 2 days ago | parent | prev | next [-]

"Think of humans in factories fishing out faulty items, where formerly they would perhaps be the artisans that made the product in the first place."

But according to this Indian service provider's website, the workers (Indians?) are hired to "clean up", not "fish out", the "faulty items".

Imagine a factory where the majority of items produced are faulty and easily "fished out". But instead of discarding them,^1 workers have to fix each one.

1. The energy costs of production are substantial

raincole 3 days ago | parent | prev [-]

And humans are hired to clean up humans' slop all the time. Especially in software development.