| ▲ | ripe 9 hours ago |
| I really like this author's summary of the 1983 Bainbridge paper about industrial automation. I have often wondered how to apply those insights to AI agents, but I was never able to summarize it as well as OP. Bainbridge by itself is a tough paper to read because it's so dense. It's just four pages long and worth following along: https://ckrybus.com/static/papers/Bainbridge_1983_Automatica... For example, see this statement in the paper: "the present generation of automated systems, which are monitored by former manual operators, are riding on their skills, which later generations of operators cannot be expected to have." This summarizes the first irony of automation, which is now familiar to everyone on HN: using AI agents effectively requires an expert programmer, but to build the skills to be an expert programmer, you have to do the programming yourself. It's full of insights like that. Highly recommended! |
|
| ▲ | yannyu 8 hours ago | parent | next [-] |
| I think it's even more pernicious than the paper describes, as cultural outputs, art, and writing aren't done to solve a problem; they're expressions that don't have a pure utility purpose. There's no "final form" for these things, and they change constantly, like language. All of these AI outputs are both polluting the commons where they pulled all their training data AND are alienating the creators of these cultural outputs via displacement of labor and payment, which means that general-purpose models are starting to run out of contemporary, low-cost training data. So either training data is going to get more expensive because you're going to have to pay creators, or these models will slowly drift away from the contemporary cultural reality. We'll see where it all lands, but it seems clear that this is a circular problem with a time delay, and we're just waiting to see what the downstream effect will be. |
| |
| ▲ | hannasanarion 8 hours ago | parent | next [-] | | > All of these AI outputs are both polluting the commons where they pulled all their training data AND are alienating the creators of these cultural outputs via displacement of labor and payment No dispute on the first part, but I really wish there were numbers available somehow to address the second. Maybe it's my cultural bubble, but it sure feels like the "AI Artpocalypse" isn't coming, in part because of AI backlash in general, but more specifically because people who are willing to pay money for art seem to strongly prefer that their money goes to an artist, not a GPU cluster operator. I think a similar idea might be persisting in AI programming as well, even though it seems like such a perfect use case. Anthropic released an internal survey a few weeks ago that was like, the vast majority, something like 90%, of their own workers' AI usage was spent explaining and learning about things that already exist, or doing little one-off side projects that otherwise wouldn't have happened at all because of the overhead, like building little dashboards for a single dataset or something, stuff where the outcome isn't worth the effort of doing it yourself. For everything that actually matters and would be paid for, the premier AI coding company is using people to do it. | | |
| ▲ | kurthr 7 hours ago | parent | next [-] | | I guess I'm in a bubble, because it doesn't feel that way to me. When AI tops the charts (in country music) and digital visual artists have to basically film themselves working to prove that they're actually creating their art, it's already gone pretty far. It feels like even when people care (and the great mass do not), it creates problems for real artists. Maybe they will shift to some other forms of art that aren't so easily generated, or maybe they'll all just do "clean up" on generated pieces and fake brush sequences. I'd hate for art to become just tracing the outlines of something made by something else. Of course, one could say the same about photography, where the art is entirely in choosing the place, time, and exposure. Even that has taken a hit with believable photorealistic generators. Even if you can detect a generator, it spoils the field and creates suspicion rather than wonder. | | | |
| ▲ | clickety_clack 4 hours ago | parent | prev | next [-] | | Art is political more than it is technical. People like Banksy’s art because it’s Banksy, not because he creates accurate images of policemen and girls with balloons. | | |
| ▲ | majormajor 4 hours ago | parent [-] | | I think "cultural" is a better word there than "political." But Banksy wasn't originally Banksy. I would imagine that you'll see some new heavily-AI-using artists pop up and become name brands in the next decade. (One wildcard here could be if the super-wealthy art-speculation bubble ever pops.) Flickr, etc, didn't stop new photographers from having exhibitions and being part of the regular "art world" so I expect the easy availability of slop-level generated images similarly won't change that some people will do it in a way that makes them in-demand and popular at the high end. At the low-to-medium end there are already very few "working artists" because of a steady decline after the spread of recorded media. Advertising is an area where working artists will be hit hard but is also a field where the "serious" art world generally doesn't consider it art in the first place. | | |
| ▲ | irishcoffee 2 hours ago | parent [-] | | > I think "cultural" is a better word there than "political." Oh. What is the difference? |
|
| |
| ▲ | musicale 5 hours ago | parent | prev | next [-] | | > people who are willing to pay money for art seem to strongly prefer that their money goes to an artist, not a GPU cluster operator Businesses which don't want to pay money strongly prefer AI. | | |
| ▲ | sureglymop 5 hours ago | parent | next [-] | | Yeah, but if they, for example, use AI to do their design or marketing materials, then the public seems to dislike that. But again, no numbers, that's just how it feels to me. | |
| ▲ | heavyset_go 4 hours ago | parent | prev [-] | | Then they get a product that legally isn't theirs and anyone can do anything with it. AI output isn't anyone's IP, it can't be copyrighted. |
| |
| ▲ | smj-edison 6 hours ago | parent | prev | next [-] | | I'd distinguish between physical art and digital art tbh. Physical art has already grappled with being automated away with the advent of photography, but people still buy physical art because they like the physical medium and want to support the creator. Digital art (for one-off needs), however, is a trickier place, since I think that's where AI is displacing work. It's not making masterpieces, but if someone wanted a picture of a dwarf for a D&D campaign, they'd probably generate it instead of contracting it out. | |
| ▲ | crooked-v 4 hours ago | parent | prev [-] | | > more specifically because people who are willing to pay money for art seem to strongly prefer that their money goes to an artist, not a GPU cluster operator. Look at furniture. People will pay a premium for handcrafted furniture because it becomes part of the story of the result, even when Ikea offers a basically identical piece (with their various solid-wood items) at a fraction of the price and with a much easier delivery process. Of course, AI art also has the issue that it's effectively impossible to actually dictate details exactly like you want. I've used it for no-profit hobby things (wargames and tabletop games, for example), and getting exact details for anything (think "fantasy character profile using X extensive list of gear in Y specific visual style") takes extensive experimentation (most of which can't be generalized well since it depends on quirks of individual models and sub-models) and photoshopping different results together. If I were doing it for a paid product, just commissioning art would probably be cheaper overall compared to the person-hours involved. |
| |
| ▲ | patcon 3 hours ago | parent | prev | next [-] | | > AND are alienating the creators of these cultural outputs via displacement of labor and payment YES. Thank you for these words. It's a form of ecological collapse. Though to be fair, the creative ecology has always operated at the margins. But it's a form of library for challenges in the world, like how a rainforest is an archive of genetic diversity, with countless applications like antibiotics. If we destroy it, we lose access to the library, to the archive, just as the world is getting even more treacherous and unstable and is in need of creativity. | |
| ▲ | vkou 4 hours ago | parent | prev [-] | | > So either training data is going to get more expensive because you're going to have to pay creators, or these models will slowly drift away from the contemporary cultural reality. Nah, more likely is that contemporary cultural reality will just shift to accept the output of the models and we'll all be worse off. (Except for the people selling the models, they'll be better off.) You'll be eating nothing but the cultural equivalent of junk food, because that's all you'll be able to afford. (Not because you don't have the money, but because artists can't afford to eat.) |
|
|
| ▲ | BinaryIgor 8 hours ago | parent | prev | next [-] |
| Yes! One could argue that we might end up with programmers (experts) going through training in creating software manually first, before becoming operators of AI, and then also regularly spending some of their working time (10-20%?) on keeping these skills sharp by working on purely educational projects, in the old-school way; but it begs the question: does it then really speed us up and generally make things better? |
| |
|
| ▲ | agumonkey 3 hours ago | parent | prev | next [-] |
| I kinda fear that this is an economic plane stall: we're tilting upward so much that the underlying conditions are about to dissolve. And I'd add that recent LLM magic (I admit they've reached a maturity level that is hard to deny) is also a two-edged sword. They don't often create abstractions; they create a very well-made set of byproducts (code, config, docs, whatever else) to realize your demand, but people right now don't need to create new, improved methods, frameworks, or paradigms because the LLM doesn't have our mental constraints.. (maybe later reasoning LLMs will tackle that, plausibly) |
|
| ▲ | frabonacci 6 hours ago | parent | prev | next [-] |
| The author's conclusion feels even more relevant today: AI automation doesn't really remove human difficulty; it just moves it around, often making it harder to notice and riskier. And even after a human steps in, there's usually a lot of follow-up and adjustment work left to do. Thanks for surfacing these uncomfortable but relevant insights. |
| |
|
| ▲ | Legend2440 5 hours ago | parent | prev | next [-] |
| >the present generation of automated systems, which are monitored by former manual operators, are riding on their skills, which later generations of operators cannot be expected to have. But we are in the later generation now. All the 1983 operators are now retired, and today's factory operators have never had the experience of 'doing it by hand'. Operators still have skills, but it's 'what to do when the machine fails' rather than 'how to operate fully manually'. Many systems cannot be operated fully manually under any conditions. And yet they're still doing great. Factory automation has been wildly successful and is responsible for why manufactured goods are so plentiful and inexpensive today. |
| |
| ▲ | gmueckl 5 hours ago | parent [-] | | It's not so simple. The knowledge hasn't been transferred to future operators, but to process engineers who are now in charge of making the processes work reliably through even more advanced automation that requires more complex skills and technology to develop and produce. | | |
| ▲ | Legend2440 5 hours ago | parent [-] | | No doubt, there are people that still have knowledge of how the system works. But operator inexperience didn't turn out to be a substantial barrier to automation, and they were still able to achieve the end goal of producing more things at lower cost. |
|
|
|
| ▲ | fuzzfactor 5 hours ago | parent | prev | next [-] |
| >skills, which later generations of operators cannot be expected to have. You can't ring more true than this. For decades now. For a couple years there I was able to get some ML together and it helped me get my job done, never came close to AI, I only had kilobytes of memory anyway. By the time 1983 rolled around, I could see the writing on the wall: AI was going to take over a good share of automation tasks in a more intelligent way by bumping the expert systems up a notch. Sometimes this is going to be a quantum notch and it could end up like "expertise squared" or "productivity squared" [0]. At the rarefied upper bound. Using programmable electronics to multiply the abilities of the true expert whilst simultaneously the expert utilized their abilities to multiply the effectiveness of the electronics. Maybe only reaching the apex when the most experienced domain expert does the programming, or at least runs the show. Never did see that paper, but it was obvious to many. I probably mentioned this before, but that's when I really buckled down for a lifetime of experimental natural science across a very broad range of areas which would be more & more suitable for automation.
While operating professionally within a very narrow niche where personal participation would remain the source of truth long enough for compounding to occur. I had already been a strong automation pioneer in my own environment. So I was always fine regardless of the overall automation landscape, and spent the necessary decades across thousands of surprising edge cases getting an idea how I would make it possible for someone else to even accomplish some of these difficult objectives, or perhaps one day fully automate. If the machine intelligence ever got good enough. Along with the other electronics, which is one of the areas I was concentrating on. One of the key strategies did turn out to be outliving those who had extensive troves of their own findings, but I really have not automated that much. As my experience level becomes less common, people seem to want me to perform in person with greater desire every decade :\ There's related concepts for that too, some more intelligent than others ;) [0] With a timely nod to a college room mate who coined the term "bullshit squared" |
| |
| ▲ | Animats an hour ago | parent [-] | | > By the time 1983 rolled around That early? There were people claiming that back then, but it didn't really work. |
|
|
| ▲ | naveen99 3 hours ago | parent | prev | next [-] |
| I mean, how did you get an expert programmer before? Surely it can't be harder to learn to program with AI than without AI. It's written in the book of resnet. You could swap out AI with google or stackoverflow or documentation or unix… |
|
| ▲ | startupsfail 9 hours ago | parent | prev [-] |
| The same argument was made about needing to be an expert programmer in assembly language to use C, and then the same for C and Python, and then Python and CUDA, and then Theano/Tensorflow/Pytorch. And yet here we are, able to talk to a computer that writes Pytorch code that orchestrates the complexity below it. And it even talks back coherently sometimes. |
| |
| ▲ | gipp 9 hours ago | parent | next [-] | | Those are completely deterministic systems, of bounded scope. They can be ~completely solved, in the sense that all possible inputs fall within the understood and always correctly handled bounds of the system's specifications. There's no need for ongoing, consistent human verification at runtime. Any problems with the implementation can wait for a skilled human to do whatever research is necessary to develop the specific system understanding needed to fix it. This is really not a valid comparison. | |
| ▲ | wasabi991011 9 hours ago | parent | prev | next [-] | | No, that is a terrible analogy. High level languages are deterministic, fully specified, non-leaky abstractions. You can write C and know for a fact what you are instructing the computer to do. This is not true for LLMs. | | |
| ▲ | ben_w 9 hours ago | parent [-] | | I was going to start this with "C's fine, but consider more broadly: one reason I dislike reactive programming is that the magic doesn't work reliably and the plumbing is harder to read than doing it all manually", but then I realised: While one can in principle learn C as well as you say, in practice there's loads of cases of people getting surprised by undefined behaviour and all the famous classes of bug that C has. | | |
| ▲ | layer8 4 hours ago | parent | next [-] | | There is still the important difference that you can reason with precision about a C implementation’s behavior, based on the C standard and the compiler and library documentation, or its source or machine code when needed. You can’t do that type of reasoning for LLMs, or only to a very limited extent. | |
| ▲ | Bootvis 7 hours ago | parent | prev [-] | | Maybe, but buffer overflows would occur in assembler written by experts as well. C is a fine portable assembler (it could probably be better with the knowledge we have now), but programming is hard. My point: you can roughly expect an expert C programmer to produce as many bugs per unit of functionality as an expert assembly programmer. I believe it to be likely that the C programmer would even write the code faster and better because of the useful abstractions. An LLM will certainly write the code faster but it will contain more bugs (IME). |
|
| |
| ▲ | the_snooze 8 hours ago | parent | prev [-] | | >And yet here we are, able to talk to a computer that writes Pytorch code that orchestrates the complexity below it. It writes something that's almost, but not quite, entirely unlike Pytorch. You're putting a little too much value on a simulacrum of a programmer. |
|