gyomu 10 hours ago

This March 2025 post from Aral Balkan stuck with me:

https://mastodon.ar.al/@aral/114160190826192080

"Coding is like taking a lump of clay and slowly working it into the thing you want it to become. It is this process, and your intimacy with the medium and the materials you’re shaping, that teaches you about what you’re making – its qualities, tolerances, and limits – even as you make it. You know the least about what you’re making the moment before you actually start making it. That’s when you think you know what you want to make. The process, which is an iterative one, is what leads you towards understanding what you actually want to make, whether you were aware of it or not at the beginning. Design is not merely about solving problems; it’s about discovering what the right problem to solve is and then solving it. Too often we fail not because we didn’t solve a problem well but because we solved the wrong problem.

When you skip the process of creation you trade the thing you could have learned to make for the simulacrum of the thing you thought you wanted to make. Being handed a baked and glazed artefact that approximates what you thought you wanted to make removes the very human element of discovery and learning that’s at the heart of any authentic practice of creation. Where you know everything about the thing you shaped into being from when it was just a lump of clay, you know nothing about the image of the thing you received for your penny from the vending machine."

helloplanets 10 hours ago | parent | next [-]

And when programming with agentic tools, you need to actively push to keep the idea from regressing to the most obvious/average version. The amount of effort you need to expend on pushing an idea that deviates from the 'norm' (because it's novel) is actually comparable to the effort it takes to type something out by hand. Just two completely different types of effort.

There's an upside to this sort of effort too, though. You actually need to make it crystal clear what your idea is and what it is not, because of the continuous pushback from the agentic programming tool. The moment you stop pushing back is the moment the LLM rolls over your project and more than likely destroys what was unique about your thing in the first place.

fallous 9 hours ago | parent | next [-]

You just described the burden of outsourcing programming.

onion2k 5 hours ago | parent | next [-]

Outsourcing development and vibe coding are incredibly similar processes.

If you just chuck ideas at the external coding team/tool you often get rubbish back.

If you're good at managing the requirements and defining things well you can achieve very good things with much less cost.

darkwater 7 hours ago | parent | prev | next [-]

With the basic and enormous difference that the feedback loop is 100 or even 1000x faster. Which changes the type of game completely, although other issues will probably arise as we try this new path.

Terr_ 6 hours ago | parent [-]

That embeds an assumption that the outsourced human workers are incapable of thought, and experience/create zero feedback loops of their own.

Frustrated rants about deliverables aside, I don't think that's the case.

darkwater 4 hours ago | parent | next [-]

No. It just reflects the harsh reality: what's really soul-crushing in outsourced work is having endless meetings to pass down / get back information, having to wait days/weeks/months to get some "deliverable" back to iterate on, etc. Yes, outsourced human workers are totally capable of creative thinking that makes sense, but their incentive will always be throughput over quality, since their bosses usually quote fixed prices (at least in my personal experience).

If you are outsourcing to an LLM in this case YOU are still in charge of the creative thought. You can just judge the output and tune the prompts or go deep in more technical details and tradeoffs. You are "just" not writing the actual code anymore, because another layer of abstraction has been added.

Jagerbizzle an hour ago | parent | next [-]

Also, with an LLM you can tell it to throw away everything and start over whenever you want.

When you do this with an outsourced team, it can happen at most once per sprint, and with significant pushback, because there's a desire for them to get paid for their deliverable even if it's not what you wanted or suffers some other fundamental flaw.

raw_anon_1111 an hour ago | parent [-]

Yep, just these past two weeks. I tried to reuse an implementation I had used for another project, it took me a day to modify it (with Codex), I tried it out and it worked fine with a few hundred documents.

Then I tried to push 50,000 documents through it, and it crashed and burned like I suspected. It took one day to go from my second, more complicated but more scalable spec - where I didn't depend on an AWS managed service - to working, scalable code.

It would have taken me at least a week to do it myself

dimitrios1 an hour ago | parent | prev [-]

It doesn't have to be soul crushing.

Just like people more, and have better meetings.

Life is what you make it.

Enjoy yourself while you can.

ambicapter an hour ago | parent | prev [-]

Not really, it's just obviously true that the communication cycle with your terminal/LLM is faster than with a human over Slack/email.

tomrod 8 hours ago | parent | prev | next [-]

100%! There is significant analogy between the two!

salawat 8 hours ago | parent [-]

There is a reason management types are drawn to it like flies to shit.

theshrike79 6 hours ago | parent [-]

Working with and communicating with offshored teams is a specific skill too.

There are tips and tricks for managing them, and not knowing them will bite you later on. Like the basic thing of never asking yes-or-no questions, because in some cultures saying "no" isn't a thing. They'd rather just default to yes and effectively lie than admit failure.

9 hours ago | parent | prev | next [-]
[deleted]
agumonkey 7 hours ago | parent | prev [-]

We need a new word for on-premise offshoring.

On-shoring ;

aleph_minus_one 7 hours ago | parent | next [-]

> On-shoring

I thought "on-shoring" was already commonly used for the process that undoes off-shoring.

saghm 6 hours ago | parent | next [-]

How about "in-shoring"? We already have "insuring" and "ensuring", so we might as well add another confusingly similar sounding term to our vocabulary.

weebull 3 hours ago | parent | next [-]

How about we leave "...shoring" alone?

5 hours ago | parent | prev [-]
[deleted]
boring-human 42 minutes ago | parent | prev [-]

En-shoring?

tmtvl 2 hours ago | parent | prev | next [-]

Rubber-duckying... although a rubber ducky can't write code... infinite-monkeying?

biofox 6 minutes ago | parent [-]

In silico duckying

pferde 4 hours ago | parent | prev | next [-]

Corporate has been using the term "best-shoring" for a couple of years now. To my best guess, it means "off-shoring or on-shoring, whichever of the two is cheaper".

intended 6 hours ago | parent | prev | next [-]

Ai-shoring.

Tech-shoring.

johnisgood 5 hours ago | parent | next [-]

Would work, but with "snoring". :D

dzdt 4 hours ago | parent | prev [-]

vibe-shoring

heliumtera 3 hours ago | parent | prev [-]

We already have a perfect one

Slop;

dkdbejwi383 6 hours ago | parent | prev | next [-]

Fair enough but I am a programmer because I like programming. If I wanted to be a product manager I could have made that transition with or without LLMs.

sgarland an hour ago | parent | next [-]

Agreed. The higher-ups at my company are, like most places, breathlessly talking about how AI has changed the profession - how we no longer need to code, but merely describe the desired outcome. They say this as though it’s a good thing.

They’re destroying the only thing I like about my job - figuring problems out. I have a fundamental impedance mismatch with my company’s desires, because if someone hands me a weird problem, I will happily spend all day or longer on that problem. Think, hypothesize, test, iterate. When I’m done, I write it up in great detail so others can learn. Generally, this is well-received by the engineer who handed the problem to me, but I suspect it’s mostly because I solved their problem, not because they enjoyed reading the accompanying document.

raw_anon_1111 2 hours ago | parent | prev [-]

I’m a programmer (well half my job) because I was a short (still short) fat (I got better) kid with a computer in the 80s.

Now, the only reason I code and have been since the week I graduated from college was to support my insatiable addictions to food and shelter.

While I like seeing my ideas come to fruition, over the last decade my ideas were a lot larger than I could reasonably do over 40 hours without having other people working on projects I lead. Until the last year and a half where I could do it myself using LLMs.

Seeing my carefully designed spec that includes all of the cloud architecture get done in a couple of days - with my hands on the wheel - when it would have taken at least a week of me doing some of the work while juggling a couple of other people, is life changing.

GCUMstlyHarmls 9 hours ago | parent | prev | next [-]

I can't help but imagine training horses vs training cats. One of them is rewarding, a pleasure, beautiful to see, the other is frustrating, leaves you with a lot of scratches and ultimately both of you "agreeing" on a marginal compromise.

lambdaone 4 hours ago | parent | next [-]

Right now vibe coding is more like training cats. You are constantly pushing against the model's tendency to produce its default outputs regardless of your directions. When those default outputs are what you want - which they are in many simple cases of effectively English-to-code translation with memorized lookup - it's great. When they are not, you might as well write the code yourself and at least be able to understand the code you've generated.

kimixa 3 hours ago | parent [-]

Yup - I've likened it to working with juniors: often smart, with good understanding and "book knowledge" of many of the languages and tools involved, but you regularly have to step back and correct things - normally around local details and project specifics. But then the "junior" you work with every day changes, so you have to start again from scratch.

I think there needs to be a sea change in current LLM tech to make that no longer the case: either massively increased context sizes, so they can hold close to a career's worth of learning (without the tendency to start ignoring that context, which even the larger of today's still-way-too-small windows already show), or continuous training passes that fold those "learnings" directly into the weights - which might be theoretically possible today, but requires many orders of magnitude more compute than is available, even if you ignore cost.

throwthrowuknow an hour ago | parent [-]

Try writing more documentation. If your project is bigger than a one man team then you need it anyways and with LLM coding you effectively have an infinite man team.

KptMarchewa 3 hours ago | parent | prev [-]

I've never seen a horse that scratches you.

rixed 3 hours ago | parent | prev | next [-]

To me it feels a bit like literate programming: it forces you to form a much more accurate idea of your project before you start. Not a bad thing, but it can also be wasteful when you eventually realise, after the fact, that the idea was actually not that good :)

fflluuxx 4 hours ago | parent | prev | next [-]

This is why people think less of artists like Damien Hirst and Jeff Koons: their hands have never once touched the art. They have no connection to the effort. To the process. To the trial and error. To the suffering. They've outsourced it, monetized it, and made it as efficient as possible. It's also soulless.

jiveturkey 9 hours ago | parent | prev | next [-]

> need to make it crystal clear

That's not an upside in the sense of being unique to LLM-generated vs. human-written code. When writing it yourself, you also need to make it crystal clear. You just do that in the language of implementation.

balamatom 5 hours ago | parent | next [-]

And programming languages are designed for clarifying the implementation details of abstract processes; while human language is this undocumented, half grandfathered in, half adversarially designed instrument for making apes get along (as in, move in the same general direction) without excessive stench.

The humane and the machinic need to meet halfway - any computing endeavor involves not only specifying something clearly enough for a computer to execute it, but also communicating to humans how to benefit from the process thus specified. And that's the proper domain not only of software engineering, but the set of related disciplines (such as the various non-coding roles you'd have in a project team - if you have any luck, that is).

But considering the incentive misalignments which easily come to dominate in this space even when multiple supposedly conscious humans are ostensibly keeping their eyes on the ball, no matter how good the language machines get at doing the job of any of those roles, I will still intuitively mistrust them exactly as I mistrust any human or organization with responsibly wielding the kind of pre-LLM power required for coordinating humans well enough to produce industrial-scale LLMs in the first place.

What's said upthread about the wordbox continually trying to revert you to the mean as you're trying to prod it with the cowtool of English into outputting something novel, rings very true to me. It's not an LLM-specific selection pressure, but one that LLMs are very likely to have 10x-1000xed as the culmination of a multigenerational gambit of sorts; one whose outset I'd place with the ever-improving immersive simulations that got the GPU supply chain going.

8 hours ago | parent | prev [-]
[deleted]
Der_Einzige 9 hours ago | parent | prev [-]

Yet another example of "comments that are only sort of true because high temperature sampling isn't allowed".

If you use LLMs at very high temperature with samplers which correctly keep your writing coherent (e.g. min_p, or better options like top-h or P-less decoding), then "regression to the mean" literally DOES NOT HAPPEN!!!!
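For readers unfamiliar with min_p: it keeps only tokens whose probability is at least some fraction of the most likely token's, then renormalizes, which is what lets the temperature be cranked up without the output degenerating into noise. A minimal, illustrative sketch (plain Python over a toy logit list, not any real inference stack):

```python
import math
import random

def min_p_sample(logits, temperature=1.5, min_p=0.1):
    """Sample a token index: apply temperature, then drop every token
    whose probability is below min_p * (probability of the top token)."""
    # Temperature-scaled softmax (numerically stabilized)
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]

    # Min-p filter: threshold is relative to the most likely token
    threshold = min_p * max(probs)
    kept = [(i, p) for i, p in enumerate(probs) if p >= threshold]

    # Renormalize over the survivors and sample
    z = sum(p for _, p in kept)
    r = random.random() * z
    for i, p in kept:
        r -= p
        if r <= 0:
            return i
    return kept[-1][0]
```

Because the cutoff scales with the top token's probability, a confident distribution stays nearly greedy even at high temperature, while a flat distribution keeps many candidates.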

hnlmorg 7 hours ago | parent | next [-]

Have you actually tried high temperature values for coding? Because I don’t think it’s going to do what you claim it will.

LLMs don’t “reason” the same way humans do. They follow text predictions based on statistical relevance. So raising the temperature will more likely increase the likelihood of unexecutable pseudocode than it would create a valid but more esoteric implementation of a problem.
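Concretely, temperature just rescales the model's next-token distribution before sampling; it doesn't change what the model "knows". A toy illustration of the effect on three candidate tokens:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert logits to probabilities; higher temperature flattens
    the distribution, giving unlikely tokens more mass."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Example logits for three candidate tokens
logits = [4.0, 2.0, 0.0]
cold = softmax_with_temperature(logits, 0.5)  # sharpens: top token dominates
hot = softmax_with_temperature(logits, 5.0)   # flattens: tail tokens gain mass
```

At high temperature the tail tokens include both "esoteric but valid" and "plainly broken" continuations in whatever proportion the model assigns them, which is why raising temperature alone tends to produce unexecutable code rather than cleverer code.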

Terr_ 6 hours ago | parent | next [-]

To put it another way, a high-temperature mad-libs machine will write a very unusual story, but that isn't necessarily the same as a clever story.

balamatom 4 hours ago | parent [-]

So why is this "temperature" not on, like, a rotary encoder?

So you can just, like, tweak it when it's working against your intent in either direction?

bob1029 6 hours ago | parent | prev [-]

High temperature seems fine for my coding uses on GPT-5.2.

Code that fails to execute or compile is the default expectation for me. That's why we feed compile and runtime errors back into the model after it proposes something each time.

I'd much rather the code sometimes not work than to get stuck in infinite tool calling loops.
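The feed-errors-back loop described above can be sketched in a few lines. Here `model` is a hypothetical stand-in for a real LLM call (not any actual API); everything else is standard library:

```python
import os
import subprocess
import sys
import tempfile

def generate_until_it_runs(task, model, max_rounds=5):
    """Ask the model for code, try to run it, and feed any error output
    back into the next prompt. `model` is a callable taking a prompt
    string and returning Python source - a placeholder for an LLM call."""
    prompt = task
    for _ in range(max_rounds):
        code = model(prompt)
        # Write the candidate code to a temp file and execute it
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        result = subprocess.run([sys.executable, path],
                                capture_output=True, text=True)
        os.unlink(path)
        if result.returncode == 0:
            return code  # the model produced code that runs cleanly
        # Otherwise, hand the error output back for the next attempt
        prompt = f"{task}\n\nPrevious attempt:\n{code}\n\nError:\n{result.stderr}"
    return None  # gave up after max_rounds
```

Real agent harnesses add tests, linters, and tool calls on top, but the core loop is this: run, capture stderr, re-prompt.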

adevilinyc 9 hours ago | parent | prev [-]

How do you configure LLM temperature in coding agents, e.g. opencode?

kabr 8 hours ago | parent | next [-]

https://opencode.ai/docs/agents/#temperature

set it in your opencode.json
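Per the linked docs, it's a per-agent setting; something along these lines (check the docs for the exact schema, which may have changed):

```json
{
  "$schema": "https://opencode.ai/config.json",
  "agent": {
    "build": {
      "temperature": 0.2
    }
  }
}
```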

Der_Einzige 9 hours ago | parent | prev [-]

You can't without hacking it! That's my point! The only places where you easily can are via the API directly, or "coomer" frontends like SillyTavern, Oobabooga, etc.

Same problem with image generation (lack of support for different SDE solvers, the image version of LLM sampling) but they have different "coomer" tools, i.e. ComfyUI or Automatic1111

yoyohello13 8 hours ago | parent [-]

Once again, porn is where the innovation is…

dizhn 6 hours ago | parent [-]

Please.. "Creative Writing"

dwaite an hour ago | parent | prev | next [-]

Supposedly when Michelangelo was asked about how he created the statue of David, he said "I just chipped away everything that wasn’t David.”

Your work is influenced by the medium by which you work. I used to be able to tell very quickly if a website was developed in Ruby on Rails, because some approaches to solve a problem are easy and some contain dragons.

If you are coding in clay, the problem is getting turned into a problem solvable in clay.

The challenge if you are directing others (people or agents) to do the work is that you don't know if they are taking into account the properties of the clay. That may be the difference between clean code - and something which barely works and is unmaintainable.

I'd say in both cases of delegation, you are responsible for making sure the work is done correctly. And, in both cases, if you do not have personal experiences in the medium you may not be prepared to judge the work.

socalgal2 6 hours ago | parent | prev | next [-]

To me it's all abstraction. I didn't write my own OS. I didn't write my own compiler. I didn't write the standard library. I just use them. I could write them but I'm happy to work on the new thing that uses what's already there.

This is no different than many things. I could grow a tree and cut it into wood but I don't. I could buy wood and nails and brackets and make furniture but I don't. I instead just fill my house/apartment with stuff already made and still feel like it's mine. I made it. I decided what's in it. I didn't have to make it all from scratch.

For me, lots of programming is the same. I just want to assemble the pieces

> When you skip the process of creation you trade the thing you could have learned to make for the simulacrum of the thing you thought you wanted to make

No, your favorite movie is not crap because the creators didn't grind their own lens. Popular and highly acclaimed games are not crap because they didn't write their own physics engine (Zelda uses Havok) or their own game engine (plenty of great games use Unreal or Unity).

cowboylowrez 4 hours ago | parent | next [-]

When I read discussions about this sort of thing, I often find that folks look hard for similarities and patterns, but once they succeed, they ignore the differences. AI in particular is so full of this "pattern matching" style of thinking that the real significance of this tech - how absolutely new and different it is - just gets ignored. Or, even worse, machines get "pattern matched" into humans and folks argue from that point of view. Witness all the "new musicians" who vibe code disco hits: I'll invariably see the argument that AIs train on existing music just like humans do, so what's the big deal?

But these arguments and the OP's article do reinforce that AI rots brains. Even my sparing use of Google's Gemini and my interaction with the bots here have really dinged my ability to do simple math.

Krssst 6 hours ago | parent | prev | next [-]

OSes and compilers have a deterministic public interface. They obey a specification developers know, so they can be relied on when writing correct software that depends on them, even without knowing their internal behavior. Generative AI does not have those properties.

signatoremo an hour ago | parent | next [-]

> They obey a specification developers know

Which spec? Is there a spec that says if you use a particular set of libraries you’d get less than 10 millisecond response? You can’t even know that for sure if you roll your own code, with no 3rd party libraries.

Bugs are by definition issues that arise when developers expect their code to do one thing but it does another, because of an unforeseen combination of factors. Yet we are all OK with that. That's why we accept AI code. It works well enough.

raw_anon_1111 an hour ago | parent | prev | next [-]

Yes but developers don’t have a deterministic interface. I still had to be careful about writing out my specs and make sure they were followed. At least I don’t have to watch my tone when my two mid level ticket taking developers - Claude and Codex - do something stupid. They also do it a lot faster

refactor_master 5 hours ago | parent | prev [-]

But the code you're writing is guardrailed by your oversight, the tests you decide on, and the type checking.

So whether you write the spec'd code out by hand or ask an LLM to do it is beside the point, if the code is considered a means to an end - which is what the post above yours was getting at.

skydhash 3 hours ago | parent [-]

Tests and type checking are often highway-wide guardrails when the path you want to take is like a tightrope.

Also, the code is not a means to an end. It's going to run somewhere, doing stuff someone wants done reliably and precisely. The overall goal was always to invest some programmer time and salary in order to free up more time for others. Not for everyone to start babysitting stuff.

jstanley 4 hours ago | parent | prev | next [-]

> I didn't write my own OS. I didn't write my own compiler. I didn't write the standard library. I just use them. I could write them

Maybe, but beware assuming you could do something you haven't actually tried to do.

Everything is easy in the abstract.

Hendrikto 4 hours ago | parent | prev | next [-]

> No, your favorite movie is not crap because the creators didn't grind their own lens.

But Pulp Fiction would not have been a masterpiece if Tarantino just typed “Write a gangster movie.” into a prompt field.

adriand 2 hours ago | parent | next [-]

> But Pulp Fiction would not have been a masterpiece if Tarantino just typed “Write a gangster movie.” into a prompt field.

Doesn’t that prove the point? You could do that right now, and it would be absolute trash. Just like how right now we are nowhere close to being able to make great software with a single prompt.

I’ve been vibecoding a side project and it has been three months of ideating, iterating, refining and testing. It would have taken me immeasurably longer without these tools, but the end result is still 100% my vision, and it has been a tremendous amount of work.

heliumtera 3 hours ago | parent | prev [-]

And if he did, why would I prefer using his prompt instead of mine?

"Write a gangster movie that I like", instead of "...a movie this other guy likes".

But because this is not the case, we appreciate Tarantino more than we appreciate gangster movies. It is about the process.

dropofwill 2 hours ago | parent [-]

This is exactly the process happening in the music space with Suno. Go to their subreddit, they all talk about how they only listen to ‘their’ songs, for the exact reasons you list.

It's bleak out there.

yason 5 hours ago | parent | prev | next [-]

The creative process is not dependent on the abstraction.

> For me, lots of programming is the same. I just want to assemble the pieces

How did those pieces come to be? By someone assembling other pieces, or by someone crafting them out of nothing because nobody else had written them at the time?

Of course you reuse other parts and abstractions for the things you're not working on, but each time you do something that hasn't been done before, you can't help but engage the creative process, even if you're sitting on top of 50 years' worth of abstractions.

In other words, what a programmer essentially has is a playfield. And whether the playfield is a stack of transistors or coding agents, when you program you create something new even if it's defined and built in terms of the playfield.

tonyedgecombe 5 hours ago | parent | prev | next [-]

>I instead just fill my house/apartment with stuff already made and still feel like it's mine.

I'm starting to wonder if we lose something in all this convenience. Perhaps my life is better because I cook my own food, wash my own dishes, chop my own firewood, drive my own car, write my own software. Outwardly the results look better the more I outsource but inwardly I'm not so sure.

On the subject of furnishing your house the IKEA effect seems to confirm this.

https://en.wikipedia.org/wiki/IKEA_effect

globular-toast 6 hours ago | parent | prev | next [-]

There are two stages to becoming a decent programmer: first you learn to use abstraction, then you learn when not to use abstraction.

Trying to find the right level is the art. Once you learn the tools of the trade and can do abstraction, it's natural to want to abstract everything. Most programmers go through such a phase. But sometimes things really are distinct and trying to find an abstraction that does both will never be satisfactory.

When building a house there are generally a few distinct trades that do the work: bricklayers, joiners, plumbers, electricians etc. You could try to abstract them all: it's all just joining stuff together isn't it? But something would be lost. The dangers of working with electricity are completely different to working with bricks. On the other hand, if people were too specialised it wouldn't work either. You wouldn't expect a whole gang of electricians, one who can only do lighting, one who can only do sockets, one who can only do wiring etc. After centuries of experience we've found a few trades that work well together.

So, yes, it's all just abstraction, but you can go too far.

knollimar 35 minutes ago | parent | next [-]

In higher-end work they do have specialized lighting, branch-power, and feeder electricians. And among feeder electricians, even specialized ones for medium voltage, etc.

throwaway132448 5 hours ago | parent | prev [-]

Well said, great analogy. Sometimes the level of abstraction feels arbitrary - you have to understand the circumstances that led there to see why it's not.

Trasmatta 2 hours ago | parent | prev [-]

Did you not read the post? You're talking from the space of the Builder while neglecting the Thinker. That's fine for some people, but not for others.

raw_anon_1111 2 hours ago | parent | prev | next [-]

In 30 years across 10 jobs, the companies I’ve worked for have not paid me to “code”. They’ve paid me to use my experience to add more business value than the total cost of employing me.

I’m no less proud of what I built in the last three weeks using three terminal sessions - one with codex, one with Claude, and one testing everything from carefully designed specs - than I was when I first booted a computer, did “call -151” to get to the assembly language prompt on my Apple //e in 1986.

The goal then was to see my ideas come to life. The goal now is to keep my customers happy, get projects done on time, on budget, and to requirements, and to have my employer keep putting cash in my account twice a month - and formerly AMZN stock in my brokerage account at vesting.

jstanley 4 hours ago | parent | prev | next [-]

But you can move a layer up.

Instead of pouring all of your efforts into making one single static object with no moving parts, you can simply specify the individual parts, have the machine make them for you, and pour your heart and soul into making a machine that is composed of thousands of parts, that you could never hope to make if you had to craft each one by hand from clay.

We used to have a way to do this before LLMs, of course: we had companies that employed many people, so that the top level of the company could simply specify what they wanted, and the lower levels only had to focus on making individual parts.

Even the person making an object from clay is (probably) not refining his own clay or making his own oven.

berkes 3 hours ago | parent | next [-]

> we had companies that employed many people, so that the top level of the company could simply specify what they wanted, and the lower levels only had to focus on making individual parts.

I think this makes a perfect counter-example. Because this structure is an important reason for YC to exist and what the HN crowd often rallies against.

Such large companies generally don't make good products this way. Most, today, just buy companies that built something in the vein the GP cited: a creative process, with pivots, learnings, more pivots, failures, or - when successful - most often success in an entirely different form or area than originally envisioned. Even the large tech monopolies of today originated like that. Zuckerberg never envisioned VR worlds, photo-sharing apps, or chat apps when he started the campus-photobook website. Bezos did not have some 5D-chess blueprint that included the largest internet-infrastructure-for-hire when he started selling books online.

If anything, this only strengthens the point you are arguing against: a business that operates by a "head" "specifying what they want" and having "something" figure out how to build the parts, is historically a very bad and inefficient way to build things.

i7l 3 hours ago | parent | prev | next [-]

And therein lies the crux: some people love to craft each part themselves, whereas others love to orchestrate but not manufacture each part.

With LLMs, and engineers often being forced by management to use them, everyone is pushed to become like the second group, even when it goes against their nature. The former group see each part as an end in itself, whereas the latter view it as a means.

Some people love the craft itself and that is either taken away or hollowed out.

ChrisMarshallNY 3 hours ago | parent | prev | next [-]

This is really what it’s about.

As someone who started with machine code, I'm grateful for compiled (even interpreted) languages. I can't imagine doing the kind of work that I do nowadays in machine code.

I’m finding it quite interesting, using LLM-assisted development. I still need to keep an eye on things (for example, the LLM tends to suggest crazy complex solutions, like writing an entire control from scratch, when a simple subclass, and five lines of code, will work much better), but it’s actually been a great boon.

I find that I learn a lot, using an LLM, and I love to learn.

croes 3 hours ago | parent [-]

But we become watchers instead of makers.

There is a difference between cooking and putting a ready meal into the microwave.

Both satisfy your hunger but only one can give some kind of pride.

ChrisMarshallNY 2 hours ago | parent | next [-]

Eh. I've had pride in my work for over 40 years.

The tools change, but the spirit only grows.

szundi 2 hours ago | parent | prev [-]

[dead]

amelius 4 hours ago | parent | prev | next [-]

Yes, but bad ingredients do not make a yummy pudding.

Or, it's like trying to make a MacBook Pro by buying electronics boards from AliExpress and wiring them together.

jstanley 4 hours ago | parent [-]

I'd rather have a laptop made from AliExpress components than only have a single artisanal hand-crafted resistor.

i7l 3 hours ago | parent | next [-]

That's a false dichotomy, because transistors and ICs are manufactured to be deterministic and nearly perfect. LLMs can never be guaranteed to be like that.

Yes, some things are better when manufactured in highly automated ways (like computer chips), but their design has been thoroughly tested and before shipping the chips themselves go through lots of checks to make sure they are correct. LLM code is almost never treated that way today.

amelius 3 hours ago | parent | prev | next [-]

Yes, the point is that you can use AI to build bigger things only if you're willing to accept crappy results.

sdoering 3 hours ago | parent | next [-]

To me that seems like a spurious (maybe even false) dichotomy. You can have crappy results without AI. And you can have great results with AI.

Your contrast is an either-or that, in the real world, does not exist.

Take content written by AI, prompted by a human. A lot of it is slop and crap. And there will be more slop and crap with AI than before. But that was also the case when the medium changed from handwritten to printed books. And when paper and printing became cheap, we had slop like those 10-cent Western or Romance novellas.

We also still had Goethe, still had Kleist, still had Grass (sorry, very German centric here).

We also have Inception vs. the latest sequel of any Marvel franchise.

I have seen AI-written, but human-prompted, short stories that made people well up and find ideas presented in a light not seen before. And I have seen AI-generated stories that I want to purge from my brain.

It isn't the tool - it is the one wielding it.

Question: Did photoshop kill photography? Because honestly, this AI discussion to me sounds very much like the discussion back then.

weebull 3 hours ago | parent | next [-]

> Question: Did photoshop kill photography? Because honestly, this AI discussion to me sounds very much like the discussion back then.

It killed an aspect of it: the film processing in the darkroom. Even before digital cameras were ubiquitous, it was standard to get a scan before doing any processing digitally. Chemical processing was reduced to the minimum necessary.

3 hours ago | parent | prev | next [-]
[deleted]
amelius 2 hours ago | parent | prev [-]

Lightroom killed photography.

mlrtime 3 hours ago | parent | prev [-]

I was going to reply defending AI tooling and crappy results, but I think I'm done with it.

I think there is just a class of people who believe that you cannot get "MacBook" quality with an LLM. I don't know why I try to convince them; it's not in my benefit.

3 hours ago | parent | prev [-]
[deleted]
sfn42 3 hours ago | parent | prev [-]

It's more like the chess.com vs lichess example in my mind. On the one hand you have a big org, dozens of devs, on the other you have one guy doing a better job.

It's amazing what one competent developer can do, and it's amazing how little a hundred devs end up actually doing when weighed down by bureaucracy. And let's not pretend even half of them qualify as competent, not to mention they probably don't care either. They get to work and have a 45 min coffee break, move some stuff around on the Kanban board, have another coffee break, then lunch, then foosball etc. And when they actually write some code it's ass.

And sure, for those guys maybe LLMs represent a huge productivity boost. For me it's usually faster to do the work myself than to coax the bot into creating something acceptable.

giancarlostoro 16 minutes ago | parent | prev | next [-]

The best analogy I think is, if you just take Stack Overflow code solutions, smoosh over your code and hit compile / build, and move on without ever looking at "why it works" you're really not using your skills to the best of your ability, and it could introduce bugs you didn't expect, or completely unnecessary dependencies. With Stack Overflow you can have other people pointing out the issues with the accepted answer and giving you better options.

sodapopcan 13 minutes ago | parent [-]

This keeps coming up again and again, but how many times were you actually able to copy-paste an SO solution wholesale and just have it work? Other than for THE most simple cases (usually CSS) there would always have to be some understanding involved. Of course you don't always learn deeply every time, but the whole "copy paste off of Stack Overflow" thing was always an exaggeration, and now it's being repeated in seeming earnest.

abhgh 8 hours ago | parent | prev | next [-]

This is an amazing quote - thank you. This is also my argument for why I can't use LLMs for writing (proofreading is OK) - what I write is not produced as a side-effect of thinking through a problem, writing is how I think through a problem.

Cthulhu_ 6 hours ago | parent | next [-]

Counterpoint (more devil's advocate): I'd argue it's better that an LLM writes something (e.g. the solution or the thinking-through of a problem) than nothing at all.

Counterpoint to my own counterpoint, will anyone actually (want to) read it?

Counterpoint to the third degree, to loop it back around: an LLM might, and I'd even argue an LLM is better at reading and ingesting long text (I'm thinking architectural documentation etc.) than humans are. Speaking for myself, I struggle to read attentively through e.g. a document; I quickly lose interest and scan-read, or just focus on what I need instead.

yurishimo 3 hours ago | parent [-]

I kinda saw this happen in realtime on reddit yesterday. Someone asked for advice on how to deal with a team that was in over their heads shipping slop. The crux of their question was fair, but they used a different LLM to translate their original thoughts from their native language into English. The prompt was "translate this to english for a reddit post" - nothing else.

The LLM added a bunch of extra formatting for emphasis and structure to what might have originally been a bit of a ramble, but obviously human-written. The comments absolutely lambasted the OP as a hypocrite: complaining about their team using AI, but then seeing little problem with posting an obviously AI-generated question, because the OP didn't deem their English skills good enough to ask the question directly.

I'm not going to pass judgement on this scenario, but I did think the entire encounter was a "fun" anecdote in addition to your comments.

Edit: wrods

vict7 3 hours ago | parent [-]

I saw the same post and was a bit saddened that all the comments seemed to be focused on the implied hypocrisy of the OP instead of addressing the original concern.

As someone that’s a bit of a fence-sitter on the matter of AI, I feel that using it in the way that OP did is one of the less harmful or intrusive uses.

duskdozer an hour ago | parent [-]

I see it as worse because you could have put just as much effort in - less even - and gotten a better result just sticking it in a machine translator and pasting that.

samusiam 4 hours ago | parent | prev [-]

Writing is how I think through a problem too, but that also applies to writing and communicating with an AI coding agent. I don't need to write the code per se to do the thinking.

skydhash 3 hours ago | parent [-]

You could write pseudocode as well. But for someone who is familiar with a programming language, it's just faster to use the latter. And if you're really familiar with the language, you start thinking in it.

nindalf an hour ago | parent | prev | next [-]

For me it’s a related but different worry. If I’m no longer thinking deeply, then maybe my thinking skills will simply atrophy and die. Then when I really need it, I won’t have it. I’ll be reduced to yanking the lever on the AI slot machine, hoping it comes up with something that’s good enough.

But at that point, will I even have the ability to distinguish a good solution from a bad one? How would I know, if I’ve been relying on AI to evaluate if ideas are good or not? I’d just be pushing mediocre solutions off as my own, without even realising that they’re mediocre.

tcgv an hour ago | parent | prev | next [-]

I get what he's pointing at: building teaches you things the spec can't, and iteration often reveals the real problem.

That said, the framing feels a bit too poetic for engineering. Software isn't only craft, it's also operations, risk, time, budget, compliance, incident response, and maintenance by people who weren't in the room for the "lump of clay" moment. Those constraints don't make the work less human; they just mean "authentic creation" isn't the goal by itself.

For me the takeaway is: pursue excellence, but treat learning as a means to reliability and outcomes. Tools (including LLMs) are fine with guardrails, clear constraints up front and rigorous review/testing after, so we ship systems we can reason about, operate, and evolve (not just artefacts that feel handcrafted).

rsyring 33 minutes ago | parent [-]

> That said, the framing feels a bit too poetic for engineering.

I wholeheartedly disagree but I tend to believe that's going to be highly dependent on what type of developer a person is. One who leans towards the craftsmanship side or one who leans towards the deliverables side. It will also be impacted by the type of development they are exposed to. Are they in an environment where they can even have a "lump of clay" moment or is all their time spent on systems that are too old/archaic/complex/whatever to ever really absorb the essence of the problem the code is addressing?

The OP's quote is exactly how I feel about software. I often don't know exactly what I'm going to build. I start with a general idea and it morphs towards excellence through iteration. My idea changes, and is sharpened, as it repeatedly runs into reality. And by that I mean it's sharpened as I write and refactor the code.

I personally don't have the same ability to do that with code review because the amount of time I spend reviewing/absorbing the solution isn't sufficient to really get to know the problem space or the code.

Cuervo_ 4 hours ago | parent | prev | next [-]

I personally have found success with an approach that's the inverse of how agents are being used generally.

I don't allow my agent to write any code. I ask it for guidance on algorithms, and to supply the domain knowledge that I might be missing. When using it for game dev for example, I ask it to explain in general terms how to apply noise algorithms for procedural generation, how to do UV mapping etc, but the actual implementation in my language of choice is all by hand.
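For flavor, the "explain in general terms" answers described here often amount to a sketch like this one for value noise. This is my own illustrative version, not anything the parent's agent produced:

```python
import math
import random

def value_noise_1d(x, seed=0):
    """Minimal 1D value noise: fixed pseudo-random values at integer
    lattice points, smoothly interpolated in between."""
    def lattice(i):
        # Deterministic value in [0, 1) per lattice point; the mixing
        # constants are arbitrary, just there to decorrelate i and seed.
        return random.Random(i * 374761393 + seed * 668265263).random()

    i = math.floor(x)
    t = x - i
    t = t * t * (3 - 2 * t)  # smoothstep easing, so slopes match at lattice points
    return lattice(i) * (1 - t) + lattice(i + 1) * t
```

The implementation in your language of choice, layering octaves for terrain and so on, is then exactly the by-hand part the parent keeps for themselves.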

Honestly, I think this is a sweet spot. The amount of time I save getting explanations of concepts that would otherwise get a bit of digging to get is huge, but I'm still entirely in control of my codebase.

shsksj 2 hours ago | parent [-]

Yep, this is the sweet spot. Though I still let it type code a lot - boilerplate stuff I'd be bored out of my mind typing. And I've found it has an extremely high success rate typing that code; on top of that, it's very easy for me to review. No friction at all. Granted, this is often no larger than 100 lines or so (across various files).

If it takes you more than a few seconds or so to understand code an agent generated you’re going to make mistakes. You should know exactly what it’s going to produce before it produces it.

anymouse123456 an hour ago | parent | prev | next [-]

Having a background in fine art (and also knew Aral many years ago!), this prose resonates heavily with me.

Most of the OP article also resonated with me as I bounce back and forth between learning (consuming, thinking, pulling, integrating new information) to building (creating, planning, doing) every few weeks or months. I find that when I'm feeling distressed or unhappy, I've lingered in one mode or the other a little too long. Unlike the OP, I haven't found these modes to be disrupted by AI at all, in fact it feels like AI is supporting both in ways that I find exhilarating.

I'm not sure OP is missing anything because of AI per se, it might just be that they are ready to move their focus to broader or different problem domains that are separate from typing code into an IDE?

For me, AI has allowed me to probe into areas that I would have shied away from in the past. I feel like I'm being pulled upward into domains that were previously inaccessible.

I use Claude on a daily basis, but still find myself frequently hand-writing code as Claude just doesn't deliver the same results when creating out of whole cloth.

Claude does tend to make my coarse implementations tighter and more robust.

I admittedly did make the transition from software only to robotics ~6 years ago, so the breadth of my ignorance is still quite thrilling.

oceanplexian 7 hours ago | parent | prev | next [-]

Coding is not at all like working a lump of clay unless you’re still writing assembly.

You’re taking a bunch of pre-built abstractions written by other people on top of what the computer is actually doing and plugging them together like LEGOs. The artificial syntax that you use to move the bricks around is the thing you call coding.

The human element of discovery is still there if a robot stacks the bricks based on a different set of syntax (Natural Language), nothing about that precludes authenticity or the human element of creation.

a_better_world a minute ago | parent | next [-]

changing "clay" for "legos" doesn't change the core argument, which is about the tactile feel you get for the medium as you work it with your hands, and the "artificial syntax" imposed by the medium.

vaylian 6 hours ago | parent | prev | next [-]

> You’re taking a bunch of pre-built abstractions written by other people on top of what the computer is actually doing and plugging them together like LEGOs.

Correct. However, you will probably notice that your solution to the problem doesn't feel right when the bricks that are available to you don't compose well. The AI will just happily smash bricks together, and at first glance it might seem that the task is done.

Choosing the right abstraction (bricks) is part of finding the right solution. And understanding that choice often requires exploration and contemplation. AI can't give you that.

Cthulhu_ 6 hours ago | parent [-]

Not yet, anyway; I do trust LLMs for writing snippets or features at this point, but I don't trust them for setting up new applications, technology choices, architectures, etc.

The other day people were talking about metrics: the number of lines of code people vs LLMs could output in a given time, or the lines of code in an LLM-assisted application, using LOC as a metric for productivity.

But would an LLM ever suggest using a utility or library, or re-architecting an application, over writing its own code?

I've got a fairly simple application that renders a table (and in future some charts) with metrics. At the moment all of that is done "by hand"; the last features were things like filtering and sorting the data. But that kind of thing can also be done by a "data table" library. Or the whole application can be thrown out in favor of a workbook (one of those data analysis tools; I'm not at home in that area at all). That'd save hundreds of lines of code + maintenance burden.
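For scale, the hand-rolled version being weighed against a data-table library is roughly this (column names invented for illustration):

```python
# A few metric rows of the sort the table renders (made-up fields).
rows = [
    {"name": "build", "duration_ms": 420, "passed": True},
    {"name": "lint", "duration_ms": 35, "passed": True},
    {"name": "e2e", "duration_ms": 9100, "passed": False},
]

def filter_rows(rows, **criteria):
    # Keep rows whose fields match every given criterion.
    return [r for r in rows if all(r.get(k) == v for k, v in criteria.items())]

def sort_rows(rows, key, reverse=False):
    # Stable sort on a single column.
    return sorted(rows, key=lambda r: r[key], reverse=reverse)
```

Each such helper is trivial on its own; it's the accumulation of them (multi-column sort, pagination, persistence of filter state) that a library would absorb.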

z3dd 4 hours ago | parent [-]

I was creating a Jira/bb wrapper with node recently and Claude actually used plenty of libraries to solve some tasks.

Nab443 3 hours ago | parent [-]

Same with gpt, but I felt it was more like "hey, everyone uses that, so why not me" than finding the right tool for the job. Can't say for Claude.

hennell 5 hours ago | parent | prev | next [-]

It depends what you're doing not really what you do it with.

I can do some CRUD apps where it's just data input to data store to output, with little shaping needed. Or I can do apps with lots of filters, actions, and logic based on what's inputted, which require some thought to ensure they actually solve the problem they're proposed for.

"Shaping the clay" isn't about the clay, it's about the shaping. If you have to make a ball of clay and also have to make a bridge of Lego a 175kg human can stand on, you'll learn more about Lego and building it than you will about clay.

Get someone to give you a Lego instruction sheet and you'll learn far less, because you're not shaping anymore.

satvikpendem 7 hours ago | parent | prev | next [-]

Exactly, and that's why I find AI coding solves this well: I find it tedious to put the bricks together for the umpteenth time when I can just have an AI do it (of course I'll verify the code when it's done; not advocating for vibe coding here).

This actually leaves me with a lot more time to think, about what I want the UI to look like, how I'll market my software, and so on.

Jensson 6 hours ago | parent | prev | next [-]

> Coding is not at all like working a lump of clay unless you’re still writing assembly.

Isn't the analogy apt? You can't make a working car using a lump of clay, just a car statue, a lump of clay is already an abstraction of objects you can make in reality.

balamatom 4 hours ago | parent [-]

Bingo.

lsy 7 hours ago | parent | prev | next [-]

I think the analogy to high level programming languages misunderstands the value of abstraction and notation. You can’t reason about the behavior of an English prompt because English is underspecified. The value of code is that it has a fairly strong semantic correlation to machine operations, and reasoning about high level code is equivalent to reasoning about machine code. That’s why even with all this advancement we continue to check in code to our repositories and leave the sloppy English in our chat history.

skydhash 3 hours ago | parent [-]

Yep. Any statement in Python or another language can be mapped to something the machine will do. And it will be the same thing every single time (concurrency and race issues aside). There's no English sentence that can be as clear.

We’ve created formal notation to shorten writing. And computation is formal notation that is actually useful. Why write pages of specs when I could write a few lines of code?
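That determinism is checkable, not just asserted: the same Python statement compiles to the same bytecode every time, and the mapping to machine-level steps can be inspected with the standard `dis` module.

```python
import dis

src = "total = price * qty + tax"

# Compiling the same statement twice yields byte-identical code objects:
# the source-to-bytecode mapping is a function, not a guess.
code_a = compile(src, "<demo>", "exec")
code_b = compile(src, "<demo>", "exec")
assert code_a.co_code == code_b.co_code

# And the individual machine-level steps are inspectable.
ops = [ins.opname for ins in dis.get_instructions(code_a)]
```

Ask two runs of an LLM to "add the tax to the subtotal" and there is no analogous guarantee about what comes out.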

hnlmorg 7 hours ago | parent | prev [-]

You’re both right. It just depends on the problems you’re solving and the languages you use.

I find languages like JavaScript promote the idea that of “Lego programming” because you’re encouraged to use a module for everything.

But when you start exploring ideas that haven't been thoroughly explored already, and particularly in systems languages which are less zealous about DRY (don't repeat yourself) methodologies, then you can feel a lot more like a sculptor.

Likewise if you’re building frameworks rather than reusing them.

So it really depends on the problems you’re solving.

For general day-to-day coding for your average 9-to-5 software engineering job, I can definitely relate to why people might think coding is basically “LEGO engineering”.

resters 2 hours ago | parent | prev | next [-]

It's very similar now: you have to surf a swell of selective ignorance that is (or feels?) less reliable than the ignorance one adopts when using a dependency one hasn't read and understood the source code for.

One must be conversant in abstractions that are themselves ephemeral and half-hallucinated. It's a question of what to cling to, what to elevate beyond possibly hallucinated rubbish. At some level it's a much faster version of the meatspace process, and it can be extremely emotionally uncomfortable and anarchic to many.

nielsbot 6 hours ago | parent | prev | next [-]

While there is still a market for artisanal furniture, dishes and clothes most people buy mass-produced dishes, clothes and furniture.

I wonder if software creation will end up in a similar place. There might still be a small market for handmade software, but the majority will be mass-produced. (That is, produced by LLM; or software itself will mostly go away and people will get their work done via an LLM instead of "apps".)

Cthulhu_ 6 hours ago | parent | next [-]

As with furniture, it's supply vs demand, and it's a discussion that goes back decades at this point.

Very few people (even before LLM coding tools) actually did low level "artisanal" coding; I'd argue the vast majority of software development goes into implementing features in b2b / b2c software, building screens, logins, overviews, detail pages, etc. That requires (required?) software engineers too, and skill / experience / etc, but it was more assembling existing parts and connecting them.

Years ago there was already a feeling that a lot of software development boiled down to taping libraries together.

Or from another perspective, replace "LLM" with "outsourcing".

pinkgolem 6 hours ago | parent | prev | next [-]

I would argue the opposite..

What you get right now is mass replicated software, just another copy of sap/office/Spotify/whatever

That software is not made individually for you, you get a copy like millions of other people and there is nearly no market anymore for individual software.

Llms might change that, we have a bunch of internal apps now for small annoying things..

They all have their quirks, but are only accessible internally and make life a little bit easier for people working for us.

Most of them are one-shot LLM things: throw them away if you do not need them anymore, or just one-shot them again.

Cthulhu_ 6 hours ago | parent [-]

The question is whether that's a good thing or not; software adages like "Not Invented Here" aren't going to go away. For personal tools / experiments it's probably fine, just like hacking together something in your spare time, but it can become a risk if you, others, or a business start to depend on it (just like spare time hacked tools).

I'd argue that in most cases it's better to do some research and find out if a tool already exists, and if it isn't exactly how you want it... to get used to it, like one did with all other tools they used.

williamcotton 2 hours ago | parent [-]

> it can become a risk if you, others, or a business start to depend on it (just like spare time hacked tools).

So that Excel spreadsheet that manages the entire sales funnel?

intended 5 hours ago | parent | prev [-]

Acceptance of mass production is only post establishment of quality control.

Skipping over that step results in a world of knock offs and product failures.

People buy Zara or H&M because they can offload the work of verifying quality to the brand.

This was a major hurdle that mass manufacturing had to overcome to achieve dominance.

pixl97 an hour ago | parent [-]

>Acceptance of mass production is only post establishment of quality control.

Hence why a lot of software development is gluing libraries together these days.

CraigJPerry 4 hours ago | parent | prev | next [-]

>> Coding is like

That description is NOT coding; coding is a subset of that.

Coding comes once you know what you need to build. Coding is the process of expressing that in a programming language, and as you do so you apply all your knowledge, your experience, and crucially your taste, to arrive at an implementation which does what's required (functionally and non-functionally) AND is open to the possibility of change in future.

Someone else here wrote a great comment about this the other day, along these lines: take that week of work described in the GP's comment, and on Friday afternoon delete all the code checked in. Coding is the part needed to recreate the check-in, which would take a lot less than a week!

All the other time was spent turning you into the developer who could understand why to write that code in the first place.

These tools do not allow you to skip the process of creation. They allow you to skip aspects of coding. If you choose to, they can also elide your tastes, but that's not a requirement of using them; they respond well to examples of code and other direction guiding them toward your tastes. The functional and non-functional parts they're pretty good at without much steering now, but I always steer for my tastes because, e.g., Opus 4.5 defaults to a more verbose style than I care for.

pikzel 4 hours ago | parent [-]

It's all individual. That's like saying writing only happens when you know exactly the story you want to tell. I love opening a blank project with a vague idea of what I want to do, and then just start exploring while I'm coding.

pixl97 an hour ago | parent [-]

I'm sure some coding works this way, but I'd be surprised if it's more than a small percentage of it.

koliber 3 hours ago | parent | prev | next [-]

Sometimes you want an artistic vase that captures some essential element of beauty, culture, or emotion.

Sometimes you want a utilitarian teapot to reliably pour a cup of tea.

The materials and rough process for each can be very similar. One takes a master craftsman and a lot of time to make and costs a lot of money. The other can be made on a production line and the cost is tiny.

Both are desirable, for different people, for different purposes.

With software, it's similar. A true master knows when to get it done quick and dirty and when to take the time to ponder and think.

bayindirh 3 hours ago | parent [-]

> Sometimes you want a utilitarian teapot to reliably pour a cup of tea.

If you pardon the analogy, watch how the Japanese make a utilitarian teapot which reliably pours a cup of tea.

It's more complicated and skill-intensive than it looks.

In both realms, making an artistic vase can be simpler than a simple utilitarian tool.

AI is good at making (poor quality, arguably) artistic vases via its stochastic output, not highly refined, reliable tools. Tolerances on these are tighter.

koliber 2 hours ago | parent [-]

There is a whole range of variants in between those two "artistic vs utilitarian" points. Additionally, there is a ton of variance around "artistic" vs "utilitarian".

Artisans in Japan might go to incredible lengths to create utilitarian teapots. Artisans who graduated last week from a 4-week pottery workshop will produce a different kind of quality, albeit still artisanal. $5.00 teapots from an East Asian mass-production factory will be very different from high-quality, mass-produced, upmarket teapots at a higher price. I have things in my house that fall into each of those categories (not all teapots, but different kinds of wares).

Sometimes commercial manufacturing produces worse tolerances than hand-crafting. Sometimes, commercial manufacturing is the only way to get humanly unachievable tolerances.

You can't simplify it into "always" and "never" absolutes. Artisan is not always nicer than commercial. Commercial is not always cheaper than artisan. _____ is not always _____ than ____.

If we bring it back to AI, I've seen it produce crap, and I've also seen it produce code that honestly impressed me (my opinion is based on 24 years of coding and engineering management experience). I am reluctant to make a call where it falls on that axis that we've sketched out in this message thread.

ibestvina 6 hours ago | parent | prev | next [-]

This makes no sense to me. There are plenty of artists out there (e.g. El Anatsui), not to mention whole professions such as architects, who do not interact directly with what they are building, and yet can have profound relationship with the final product.

Discovering the right problem to solve is not necessarily coupled to being "hands on" with the "materials you're shaping".

lolive 5 hours ago | parent | next [-]

In my company, [enterprise IT] architects are separated into two kinds. People with a CV longer than my arm, who know/anticipate everything that could fail and have reached a level of understanding that I personally call "wisdom". And theorists, who read books and norms, focus mostly on the nominal case, and have no idea of [and no interest in] how the real world will be a hard brick wall that challenges each and every idea you invent.

Not being hands-on, and more importantly not LISTENING to the hands-on people and learning from them, is a massive issue in my surroundings.

So thinking hard on something is cool. But making it real is a whole different story.

Note: as Steve used to say, "real artists ship".

darepublic 5 hours ago | parent | prev [-]

you think El Anatsui would concur that he didn't interact directly with what he was building? "hands on" and "material you're shaping" are metaphors

ibestvina 5 hours ago | parent [-]

I don't see why his involvement, explaining to his team how exactly to build a piece, is any different from a developer explaining to an LLM how to build a certain feature, when it comes to the level of "being hands on".

Obviously I am not comparing his final product with my code, I am simply pointing out how this metaphor is flawed. Having "workers" shape the material according to your plans does not reduce your agency.

skydhash 3 hours ago | parent [-]

> I don't see why his involvement, explaining to his team how exactly to build a piece, is any different from a developer explaining to an LLM

Because everyone under him knows that a mistake big enough is a quick way to unemployment or legal actions. So the whole team is pretty much aligned. A developer using an LLM may as well try to herd cats.

ibestvina 3 hours ago | parent [-]

First, that's quite a sad view of incentives structures. Second, you can't be serious in thinking that "worker worried they might be fired" puts the person in charge closer to the "materials" and more "hands on" with the project.

bodge5000 6 hours ago | parent | prev | next [-]

"The muse visits during the act of creation, not before. Start alone."

That has actually been a major problem for me in the past where my core idea is too simple, and I don't give "the muse" enough time to visit because it doesn't take me long enough to build it. Anytime I have given the muse time to visit, they always have.

isolli 6 hours ago | parent | prev | next [-]

This is very insightful, thanks. I had a similar thought regarding data science in particular. Writing those pandas expressions by hand during exploration means you get to know the data intimately. Getting AI to write them for you limits you to a superficial knowledge of said data (at least in my case).
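A sense of what "writing the expressions by hand" buys: even spelled out in plain Python (the dataset here is invented; in the parent's case it's pandas groupby/agg expressions), you end up touching every field yourself.

```python
from collections import defaultdict

# Stand-in dataset, invented for illustration; in the parent's case this
# would be a pandas DataFrame.
records = [
    {"city": "Oslo", "temp": 4}, {"city": "Oslo", "temp": 6},
    {"city": "Lima", "temp": 19}, {"city": "Lima", "temp": 21},
]

# Roughly df.groupby("city")["temp"].mean() spelled out by hand: you see
# every group, every value, every missing field along the way.
def mean_by(records, key, value):
    groups = defaultdict(list)
    for r in records:
        groups[r[key]].append(r[value])
    return {k: sum(v) / len(v) for k, v in groups.items()}
```

The one-liner is faster, but the hand-written loop is where you notice the odd duplicate row or impossible value.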

darepublic 5 hours ago | parent | prev | next [-]

Thanks for the quote, it definitely resonates. Distressing to see many people who can't relate to this, taking it literally and arguing that there is nothing lost the more removed they are from the process.

anonymous344 10 hours ago | parent | prev | next [-]

yes, maybe this is why it's my preference to jump directly to coding, instead of Canva to draw the GUI and stuff. I would not know what to draw, because the involvement is not so deep... or something

leftbehinds 4 hours ago | parent | prev | next [-]

reminds me of arguments for: hosting a server vs running stuff in the cloud; VPS vs containers

jatora 2 hours ago | parent | prev | next [-]

Yeah? And then you continue prompting and developing, and go through a very similar iterative process, except now it's faster and you get to tackle more abstract, higher level problems.

"Most developers don't know the assembly code of what they're creating. When you skip assembly you trade the very thing you could have learned to fully understand the application you were trying to make. The end result is a sad simulacrum of the memory efficiency you could have had."

This level of purity-testing is shallow and boring.

Bengalilol 5 hours ago | parent | prev | next [-]

I love Aral, he is so invested.

spacecadet 3 hours ago | parent | prev | next [-]

This is cute, but this is true for ALL activities in life. I have to constantly remind my brother that his job is not unique and if he took a few moments, he might realize, flipping burgers is also molding lumps of clay.

I think the biggest beef I have with Engineers is that for decades they more or less reduced the value of other people's lumps of clay, and now want to throw up their arms when it's theirs.

logicprog 4 hours ago | parent | prev | next [-]

This is beautifully written, but as a point against agentic AI coding, I just don't really get it.

It seems to assume that vibe coding or like whatever you call the Gas Town model of programming is the only option, but you don't have to do that. You don't have to specify upfront what you want and then never change or develop that as you go through the process of building it, and you don't have to accept whatever the AI gives you on the other end as final.

You can explore the affordances of the technologies you're using and modify your design and vision for what you're building as you go; if anything, AI coding makes it far easier to change and evolve my direction, because it can update all the parts of the code that need to change, keeping the tests, specification, and documentation in sync, easily and quickly.

You also don't need to take the final product as a given, a "simulacrum delivered from a vending machine": build, and once you've gotten something working, look at it, decide it's not really what you want, and continue to iterate and change and develop it. Again, with AI coding, I've found this easier than ever because it's easier to iterate on things. The process is a bit faster for not having to move the text around and look up API documentation myself, even though I'm directly dictating the architecture, organization, and algorithms, and even where code should go, most of the time.

And with the method I'm describing, where you're in the code just as much as the AI is, just using it to do the text/API/code munging, you can even let the affordances of not just the technologies but the source code and programming language itself affect how you do this: if you care about the quality, clarity, and organization of the code the AI is generating, you'll see when it's trying to brute-force its way past technical limitations, and you can redirect it to follow the grain instead. It just becomes easier and more fluid to do that.

If anything, AI coding in general makes it easier than before to have a conversation with the machine, its affordances, your design vision, and so on, because it becomes easier to update everything and move everything around as your ideas change.

And nothing about it means that you need to be ignorant of what's going on; ostensibly you're reviewing literally every line of code it creates and deciding what libraries and languages, as well as what architecture, organization, and algorithms, it's using. You are, aren't you? So you should know everything you need to know. In fact, I've learned several libraries and a language just from watching it work, enough that I can work with them without looking anything up, even new syntax and constructs that would have been very unfamiliar back in my manual-coding days.

moron4hire 4 hours ago | parent | prev | next [-]

I have no idea who this guy is (I guess he's a fantasy novelist?) but this video came up in my YouTube feed recently and feels like it matches closely with the themes you're expressing. https://youtu.be/mb3uK-_QkOo?si=FK9YnawwxHLdfATv

boredtofears 10 hours ago | parent | prev | next [-]

I dunno, when you've made about 10,000 clay pots it's kinda nice to skip to the end result; you're probably not going to learn a ton from clay pot #10,001. You can probably come up with some pretty interesting ideas for what you want the end result to look like from the outset.

I find myself being able to reach for the things that my normal pragmatist code monkey self would consider out of scope - these are often not user facing things at all but things that absolutely improve code maintenance, scalability, testing/testability, or reduce side effects.

belZaah 9 hours ago | parent | next [-]

Depends on the problem. If the complexity of what you are solving is in the business logic, or is generally low, you are absolutely right. Manually coding signup flow #875 is not my idea of fun either. But if the complexity is in the implementation, it's different. Complex cryptography, performance optimization, or near-hardware work is just a different class of problem.

aleph_minus_one 7 hours ago | parent | next [-]

> If the complexity of what you are solving is in the business logic or, generally low, you are absolutely right.

The problem is rather that programmers who work on business logic often hate programmers who are actually capable of seeing (often mathematical) patterns in the business logic that could be abstracted away; in other words: many business logic programmers hate abstract mathematical stuff.

So, in my opinion/experience this is a very self-inflicted problem that arises from the whole culture around business logic and business-logic programming.

skydhash 3 hours ago | parent | prev | next [-]

Coding signup flow #875 should be as easy as using a snippet tool or a code generator. Everyone who explains why using an LLM is a good idea sounds like they're living in the stone age of programming. There are already industrial-grade tools to get things done faster. Often so fast that I feel my time is being wasted describing it in English.

boredtofears 16 minutes ago | parent [-]

Of course I use code generation. Why would that be mutually exclusive from AI usage? Claude will take full advantage of it with proper instruction.

boredtofears 9 hours ago | parent | prev [-]

In my experience AI is pretty good at performance optimizations as long as you know what to ask for.

Can't speak to firmware code or complex cryptography, but my hunch is if it's in its training dataset and you know enough to guide it, it's generally pretty useful.

kranner 8 hours ago | parent | next [-]

> my hunch is if it's in its training dataset and you know enough to guide it, it's generally pretty useful.

Presumably humanity still has room to grow and not everything is already in the training set.

aleph_minus_one 7 hours ago | parent | prev [-]

> In my experience AI is pretty good at performance optimizations as long as you know what to ask for.

This rather suggests that the kind of performance optimizations you ask for are very "standard".

charcircuit 6 hours ago | parent | next [-]

Most optimizations are making sure you do not do work that is unnecessary or that you use the hardware effectively. The standard techniques are all you need 99% of the time you are doing performance work. The hard part about performance is dedicating the time towards it and not letting it regress as you scale the team. With AI you can have agents constantly profiling the codebase identifying and optimizing hotspots as they get introduced.

aleph_minus_one 3 hours ago | parent [-]

> Most optimizations are making sure you [...] use the hardware effectively.

If you really care about using the hardware effectively, optimizing the code is so much more than what you describe.

boredtofears 34 minutes ago | parent | prev [-]

As most are

bravetraveler 8 hours ago | parent | prev [-]

import claypot

trillion dollar industry boys

mlvljr 6 hours ago | parent [-]

[dead]

CamperBob2 10 hours ago | parent | prev [-]

Eloquent, moving, and more-or-less exactly what people said when cameras first hit the scene.

sonofhans 8 hours ago | parent | next [-]

Ironic. The frequency and predictability of this type of response — “This criticism of new technology is invalid because someone was wrong once in the past about unrelated technology” — means there might as well be an LLM posting these replies to every applicable article. It’s boring and no one learns anything.

It would be a lot more interesting to point out the differences and similarities yourself. But then if you wanted an interesting discussion you wouldn’t be posting trite flamebait in the first place, would you?

hackable_sand 3 hours ago | parent [-]

Note that we still have not solved cameras or even cars.

The biggest lesson I am learning recently is that technologists will bend over backwards to gaslight the public to excuse their own myopia.

dwrolvink 8 hours ago | parent | prev | next [-]

Interesting comparison. I remember watching a video on that. Landscape painting, portraiture, and the like are arts that have taken an enormous nosedive. We, as humans, have missed out on a lot of art because of the invention of the camera. On the other hand, the benefits of the camera need no elaboration. Currently AI has a lot of footguns though, which I don't believe the camera had. I hope AI gets past that too.

pixl97 an hour ago | parent | next [-]

>We, as humans, have missed out on a lot of art because of the invention of the camera.

I so severely doubt this to the point I'd say this statement is false.

The further back in time you go, the more expensive and rare art was. Better-quality landscapes and portraits were exceptionally rare and really only commissioned by those with money, which was a smaller portion of the population in the time before cameras. There are likely more high-quality paintings per capita now than at any point in the past, and the issue is not production but exposure to the high-quality ones. Maybe this is what you mean by 'miss out'?

In addition, the general increase in wealth, coupled with the dropping cost of art supplies, opened up massive room for lower-quality art to fill the gap. In the past, canvas was typically more expensive, so sucky pictures would get painted over.

jack_pp 6 hours ago | parent | prev [-]

The footgun cameras had was exposure time.

1826 - The Heliograph - 8+ hours

1839 - The Daguerreotype - 15–30 Mins

1841 - The Calotype - 1–2 Mins

1851 - Wet Plate Collodion - 2–20 Secs

1871 - The Dry Plate - < 1 Second.

So it took 45 years to perfect the process so you could take an instant image. Yet we complain after 4 years of LLMs that they're not good enough.

AdieuToLogic 9 hours ago | parent | prev | next [-]

> Eloquent, moving, and more-or-less exactly what people said when cameras first hit the scene.

This is a non sequitur. Cameras have not replaced paintings, assuming this is the inference. Instead, they serve only to be an additional medium for the same concerns quoted:

  The process, which is an iterative one, is what leads you 
  towards understanding what you actually want to make, 
  whether you were aware of it or not at the beginning.
Just as this applies to refining a software solution captured in code, and just as a painter discards unsatisfactory paintings and tries again, so too does it apply when people say, "that picture didn't come out the way I like, let's take another one."
williamcotton 2 hours ago | parent | next [-]

Photography’s rapid commercialisation [21] meant that many painters – or prospective painters – were tempted to take up photography instead of, or in addition to, their painting careers. Most of these new photographers produced portraits. As these were far cheaper and easier to produce than painted portraits, portraits ceased to be the privilege of the well-off and, in a sense, became democratised [22].

Some commentators dismissed this trend towards photography as simply a beneficial weeding out of second-raters. For example, the writer Louis Figuier commented that photography did art a service by putting mediocre artists out of business, for their only goal was exact imitation. Similarly, Baudelaire described photography as the “refuge of failed painters with too little talent”. In his view, art was derived from imagination, judgment and feeling but photography was mere reproduction which cheapened the products of the beautiful [23].

https://www.artinsociety.com/pt-1-initial-impacts.html#:~:te...

CamperBob2 9 hours ago | parent | prev [-]

Cameras have not replaced paintings, assuming this is the inference.

You wouldn't have known that, going by all the bellyaching and whining from the artists of the day.

Guess what, they got over it. You will too.

lkey 8 hours ago | parent | next [-]

What stole the joy you must have felt, fleetingly, as a child that beheld the world with fresh eyes, full of wonder?

Did you imagine yourself then, as you are now, hunched over a glowing rectangle? Demanding imperiously that the world share your contempt for the sublime, share your jaundiced view of those who pour the whole of themselves into the act of creation, so that everyone might once again be graced with wonder anew.

I hope you can find a work of art that breaks you free of your resentment.

ceuk 7 hours ago | parent | next [-]

Thank you for brightening my morning with a brief moment of romantic idealism in a black ocean of cynicism

kuerbel 7 hours ago | parent | prev [-]

Love your comment.

I took the liberty of pasting it to chatgpt and asked it to write another paragraph in the same style:

Perhaps it is easier to sneer than to feel, to dull the edges of awe before it dares to wound you with longing. Cynicism is a tidy shelter: no drafts of hope, no risk of being moved. But it is also a small room, airless, where nothing grows. Somewhere beyond that glowing rectangle, the world is still doing its reckless, generous thing—colors insisting on being seen, sounds reaching out without permission, hands shaping meaning out of nothing. You could meet it again, if you chose, not as a judge but as a witness, and remember that wonder is not naïveté. It is courage, practiced quietly.

balamatom 4 hours ago | parent | next [-]

Thank you for the AI warning, so I didn't have to read that.

kuerbel 2 minutes ago | parent [-]

Ah well, I'm neurodivergent and it’s challenging for me to write a comment while remembering that others don’t have access to my thoughts and might interpret things differently. And it's too late to edit it now

What I wanted to show is that, clearly unlike a camera or other devices, AI can copy originality. OP's comment was pretty original in its wording, and GPT came pretty close, imo. It really wasn't meant as a low-effort comment.

exodust 7 hours ago | parent | prev [-]

Plot twist. The comment you love is the cynical one, responding to someone who clearly embraces the new by rising above caution and concern. Your GPT addition has missed the context, but at least you've provided a nice little paradox.

AdieuToLogic 8 hours ago | parent | prev | next [-]

>> Cameras have not replaced paintings, assuming this is the inference.

> You wouldn't have known that, going by all the bellyaching and whining from the artists of the day.

> Guess what, they got over it.

You conveniently omitted my next sentence, which contradicts your position and reads thusly:

  Instead, they serve only to be an additional medium for the 
  same concerns quoted ...
> You will too.

This statement is assumptive and gratuitous.

CamperBob2 8 hours ago | parent [-]

Username checks out, at least.

AdieuToLogic 7 hours ago | parent | next [-]

> Username checks out, at least.

Thoughtful retorts such as this are deserving of the same esteem one affords the "rubber v glue"[0] idiom.

As such, I must oblige.

0 - https://idioms.thefreedictionary.com/I%27m+rubber%2c+you%27r...

salawat 8 hours ago | parent | prev [-]

Logic needs to be shown the door on occasion. Sometimes via the help of an ole Irish bar toss.

kranner 8 hours ago | parent | prev [-]

> Guess what, they got over it. You will too.

Prediction is difficult, especially of the future.

cjohnson318 8 hours ago | parent | prev | next [-]

Yeah, and cameras changed art forever.

exodust 7 hours ago | parent [-]

people still make clay pots and paint landscapes

navigate8310 5 hours ago | parent [-]

Creativity is not what you would expect out of the Renaissance

vermilingua 9 hours ago | parent | prev [-]

Source?

CamperBob2 9 hours ago | parent [-]

Art history. It's how we ended up with Impressionism, for instance.

People felt (wrongly) that traditional representational forms like portraiture were threatened by photography. Happily, instead of killing any existing genres, we got some interesting new ones.