Grief and the AI split (blog.lmorchard.com)
177 points by avernet 15 hours ago | 286 comments
oytis an hour ago | parent | next [-]

I think we should get past pretending it's about people who just like typing words on their stupid mechanical keyboards. The real split is whether you like understanding systems and inventing new things, or whether you're OK delegating that part to someone else and are happy to take credit for their success. With the small caveat that when the someone else is a human, the credit can be justified if you mentored them or created the conditions for their success and growth.

aspenmartin a minute ago | parent | next [-]

How about people who understand that things are changing whether anyone likes it or not, and who want to stay relevant? What about people who care about the end product and not about rabbitholing design decisions on a proof of concept? What about someone who understands there is more nuance than assuming people with a different perspective on AI are lesser than those who resist the technology? You may feel you know the "right way", but to everyone else interested in operating in a world changing beneath our feet, rather than whining that everything will be different and denigrating the people who want to succeed in it, this opinion is not exactly convincing. If you want to kludge your way through a problem you're welcome to, but it's not logical to suggest this is the only "right" way and imply that people who build with AI don't like "understanding systems".

When I build with AI I build things I never would have built before, and in doing so I’m exposed to technologies, designs, tools I wasn’t aware of before. I ask questions about them. Sure I don’t understand the tools as deeply as the person who wasted like 10 hours going down rabbit holes to answer a simple question, but I don’t really see that as particularly valuable.

rdevilla 4 minutes ago | parent | prev | next [-]

"Claude, lift these weights for me."

adverbly 42 minutes ago | parent | prev | next [-]

This says nothing about where people find enjoyment.

I like doing puzzles.

I like it more than planning.

At the end of the day, I'll do whatever builds the best thing, but I'll enjoy it more or less depending on what that involves.

throw310822 an hour ago | parent | prev [-]

Disagree. I think it was always obvious to me that there are at least two kinds of developers. To take an extreme example: developer A writes long, sometimes tedious, security-minded, thoroughly tested code, and has written the CI pipelines too. When tasked with some ticket, they'll develop it to the letter, not one inch further, even if it makes zero sense from the point of view of the users. Developer B knows nothing of that, doesn't write tests, can't be arsed about security and has no idea how to deploy stuff, but thinks backwards from what the users (or other developers, or their future self) might like, and tries to make that. Both have been useful, though the first kind is usually much more appreciated (maybe because their work is clearly essential, while type B's contributions are harder to measure).

AI has probably come a little earlier for type A, but type B will follow soon anyway. In the meantime, they're enjoying the ride a bit more, since AI takes care of all the tedious but essential details.

wolvesechoes 5 hours ago | parent | prev | next [-]

The real split is between people who believe technological progress is good in itself and by some law of nature always makes life better and easier, and people who know the history - who know that things like the eight-hour workday weren't spat out of a steam engine, but had to be fought for through political struggle, because the actual "natural" consequence of increased productivity was an increase in workload.

Gud 5 hours ago | parent | next [-]

The real split is between the capital owners, who live on our labour - typically through inheritance of a piece of paper that says they own a percentage of what we make - and the rest of us.

khafra 4 hours ago | parent [-]

Whether the labor theory of value is right or wrong, the "real split" you describe will soon no longer exist. Capital owners will live on the labor of their capital. Non-capital-owners will live on the largesse of capital, or will not live at all.

Unless we muster the political will to stop AI development, internationally, until we can be certain of our ability to durably imbue it with the intrinsic desire to keep humans around, doing human things.

snek_case 9 minutes ago | parent | next [-]

> imbue it with the intrinsic desire to keep humans around, doing human things.

It's not the AI you have to convince, it's your government and the people running tech companies. Dario Amodei was cheering for AI to take all programming jobs (along with the others). If that happened, it would be an unmitigated disaster for millions of people. Imagine a student who comes out of a CS major with tons of student debt. How much sympathy does Dario feel for this person? Getting him to STFU would be a good first step.

> the political will to stop AI development

The reason that's not likely is that it's an arms race. You stop AI research here, but how can you trust that China and Russia will do the same? Unlike nuclear bombs, the potential harms are less tangible.

grafmax 2 hours ago | parent | prev | next [-]

Capital is a commodity, just like a business's product. It does not produce value. Labor does. This is a central point of the LTV!

We witnessed the same thing with looms and other automation in the Industrial Revolution: capital that helps you produce more. But owners, faced with increased competition under commoditized production, see their profit margins fall. Thus they turn to squeezing workers - the source of value - for profit in the newly commoditized landscape, which is exactly what happened during the Industrial Revolution. It was only when workers got their act together and organized that this decline was stopped and reversed.

chrisvalleybay an hour ago | parent | prev | next [-]

I think there's a piece missing here. Capital owners are humans too, and what humans want (perhaps especially the ones who accumulate capital), is to be at the top of a hierarchy. But a hierarchy needs participants. If nobody else is playing the game, there's no top to be on top of. Strip away the people willing to compete, admire, envy, or just show up, and the whole structure collapses. It's not clear that a world of pure capital-on-AI-labor actually gives them what they're after. It sounds lonely and meaningless to me. I don't think that it would feed the black hole in their chests.

wolvesechoes 30 minutes ago | parent [-]

A lot of effort has been spent naturalizing the current state of affairs and its value system, even though there is nothing natural or obvious about it. Humans have lived for millennia with much higher political and social flexibility, with hierarchies built and torn down even seasonally, and with the role of property and wealth shifting back and forth.

Of course the structure exists because we allow it; that's the easy part. The hard part is: why do we allow it?

chrisvalleybay 18 minutes ago | parent [-]

I think in part because we have a black hole in our chest, and we are searching for ways to fill it. We attempt to fill it through worship at the altar of materialism, celebrity, etc. We are doing this to quiet the roar from the black hole. Actually stepping away would require us to sit with stillness, and then to forge a new path, a new life. It's frightening.

eloisius 2 hours ago | parent | prev | next [-]

I, for one, am looking forward to me and a band of my closest friends and family raiding heavily fortified data centers guarded by Boston Dynamics robot dogs to steal clean drinking water for our underground village. We might even hit a caravan of autonomous trucks carrying cricket protein powder in the same night.

sweetheart 2 hours ago | parent [-]

can i come

edgyquant 2 hours ago | parent | prev | next [-]

The people without capital will just form their own economies and continue to exist, likely they will kill the capital owners as well if it really came to that.

khafra 2 hours ago | parent [-]

What's your plan for beating the autonomous drone swarms without capital?

forgetfreeman 3 hours ago | parent | prev | next [-]

"Non-capital-owners will live on the largesse of capital, or will not live at all."

That's been tried several times now and has a tendency to end very badly for capital. You'd think folks with even a grade school level of historical literacy would know better than to stick a fork in that outlet.

khafra 2 hours ago | parent [-]

It's never been tried before. Capital has always required human labor to be productive. Capital has never before closed in on the ability to operate, maintain, defend, and expand itself without human assistance, as it is closing in on that ability now.

daveguy an hour ago | parent [-]

It's really not. The capital owners just think it is.

khafra 27 minutes ago | parent | next [-]

We'd all be a lot safer--even the capital owners--if today's robotics and multimodal intelligence were near the ceiling of what's possible, or even near the bend in the logistic curve where things slow down a lot.

I haven't seen evidence of that. I see evidence of rapid advances in task length, general capabilities, and research and development capabilities in AI, and generality, price, and autonomy in robotics.

How much headroom in these capabilities do you believe we have, before a data center can protect and maintain itself and an on-site power plant? Before robots can run a robot factory?

gom_jabbar 17 minutes ago | parent | prev [-]

The real transition would be from human-owned capital to self-owned capital. You are right that current capabilities and autonomy don't allow for that yet.

Gud 4 hours ago | parent | prev [-]

I agree.

A_Duck 2 hours ago | parent | prev [-]

Interesting to see more of this thinking on Hacker News

Perhaps one of the secondary effects of AI replacing developers will be mobilising a group of smart, motivated people to the left

(It's always interesting to think of the secondary effects which kick in past a certain point of growth. High-multiple stock valuations often fail to take these into account. For the East India Company, for example — your company can keep growing until it's the size of a country. But suddenly other countries treat you as a foreign power rather than a pet.)

wolvesechoes 42 minutes ago | parent [-]

> Interesting to see more of this thinking on Hacker News

I am on this site because it is one of the less shitty places on the Internet (in terms of usability, privacy etc.) to have some form of discussion, but I never identified as a "hacker", "techie", "entrepreneur" or "temporarily embarrassed billionaire". AI didn't change my view on anything, except it has shown me how blind and naive people can be.

Of course I tend to focus on aspects that are being discussed here (context of software engineering).

wiml 13 hours ago | parent | prev | next [-]

I think the article misunderstands completely. "Craft" coders are chasing results too — we're just chasing results that last and that can be built upon. I've been in this game for a while, and a major goal of every single good programmer I've known has been to make themselves obsolete. Yes, I enjoyed meticulous hand-crafted assembly, counting cycles and packing bits, but nobody had to talk me into using compilers. Yes, I've spent many fruitful hours writing basic CRUD apps, but now that that's easily done by libraries/frameworks I'm not eager to go back. Memory management, type systems, higher-level languages, no-/low-code systems that completely remove me from some parts of the design loop, etc etc etc. All great: the point of computer programming is to have the computer do things so we don't have to.

I think the real divide we're seeing is between people who saw software as something that is, fundamentally, improvable and understandable; and people who saw it as a mysterious roadblock foisted upon them by others, that cannot really be reasoned about or changed. And oddly, many of the people in the second category use terminology from the first, but fundamentally do not believe that the first category really exists. (Fair enough; I was surprised at the second category.) It's not about intelligence or whatever, it's a mindset or perspective thing.

randomNumber7 6 hours ago | parent | next [-]

> It's not about intelligence or whatever, it's a mindset or perspective thing.

I agree with everything except this last sentence. What you wrote looks highly intelligent and I would suspect a lot of people in the second camp are not up to par with this.

simianwords 6 hours ago | parent | prev | next [-]

You are repeating the same thing. You think having good, maintainable code is important - more than the first camp does.

That does not mean you are correct. This mindset is useful only in serious reusable libraries and open source tools. Most enterprise code involves lots of exploring and fast iteration. Code quality doesn’t matter that much. No one else is going to see it.

When the craft coders bring their ideology to this set up, it starts slowing things down because they are optimising for the wrong target.

wiseowise 5 hours ago | parent | next [-]

> Code quality doesn’t matter that much. No one else is going to see it.

This is just false for anyone who has worked in the industry for any meaningful amount of time. Have you seriously never encountered a situation where a change was supposedly easy on the surface, but some stupid SoB before you wrote it so badly that you want to pull your hair out trying to make it work without rewriting the crap codebase from scratch?

simianwords 5 hours ago | parent [-]

at least where i have worked, you need to identify the context. certain projects require good readable code and certain projects require you to iterate fast and explore.

in my experience very few projects were serious enough to require such scrutiny in code.

FuckButtons 4 hours ago | parent [-]

Sounds like you’ve never had a prototype become foundational infrastructure before, or dealt with someone else’s.

edgyquant 2 hours ago | parent | next [-]

I have, many times, and if you spend too long over-architecting a prototype you start to get annoyed looks and tons of questions from PMs who just want something that looks right today (we can fix it/optimize it later)

simianwords 3 hours ago | parent | prev [-]

you can always change it later. this is exactly the dogmatism i'm speaking about - you need to prioritise pushing things. the clean up can come later.

ironically it is your camp that advises not to use microservices but to start with a monolith. that's what i'm suggesting here.

discreteevent an hour ago | parent | next [-]

> You can always change it later.

People seem to think that technical debt doesn't need to be paid back for ages. In my experience bad code starts to cost more than it saved after about three months. So if you have to get a demo ready right now that will save the company then hack it in. But that's not the case for most technical debt. In most cases the management just want the perception of speed so they pile debt upon debt. Then they can't figure out why delivery gets slower and slower.

> ironically it is your camp that advises not to use microservices but to start with a monolith. that's what i'm suggesting here.

I agree with this. But there's a difference between over-engineering and hacking in bad quality code. So to be clear, I am talking about the latter.

skydhash an hour ago | parent | prev [-]

> you can always change it later. this is exactly the dogmatism i'm speaking about - you need to prioritise pushing things. the clean up can come later.

Everyone who says this has never been the one who had to fix the code later. They have already moved on to the next job (or been fired). Engineers do know the tradeoff between quality and speed, and can hack if that's what's needed to get the project to the finish line. But good ones will note down the hack and resolve it later. Bad ones will pat themselves on the back and add more hacks on top of it.

coffeefirst 33 minutes ago | parent [-]

Also, it takes 40 bug fixes before anyone throws up their hands and says "screw it, we're starting over."

In no way is this carelessness faster in the long term.

suddenlybananas 6 hours ago | parent | prev [-]

I think your target is the wrong target myself. Now what?

simianwords 6 hours ago | parent [-]

If more people thought like you we wouldn't have jobs, because the company wouldn't make a profit

wiseowise 5 hours ago | parent [-]

If people thought like you we wouldn't have jobs, because everyone would fucking die when cars, MRI machines, nuclear power plants, ICBMs, airplanes, infrastructure, and payments start misbehaving. Now what?

simianwords 5 hours ago | parent [-]

this is a category error that i specifically called out in my comment.

oytis an hour ago | parent | next [-]

What is the category of code that does not need quality? It would need to not interact with the real world, with people's finances, or with people's personal data. Basically it's code that only exists for PMs to show to investors (in startups) and VPs (in enterprise), but not for real users to rely on.

aleph_minus_one 14 minutes ago | parent [-]

> What is the category of code that does not need quality?

For example, there exist "applications"/"demos" that exist only "to show the customer what could be possible if they hire 'us'". These demos just have to survive, say, an intense two-hour marketing pitch and some inconvenient questions/tests that someone in the audience might come up with during those two hours.

In other words: applications for "pitching possibilities" to a potential customer, where everything is allowed to be smoke and mirrors if necessary (once the customer has been convinced with all tricks to hire the respective company for the project, the requirements will completely change anyway ...).

oytis 6 minutes ago | parent [-]

Yeah, that's what I mean - prototypes. The caveat is that before agentic coding, the skills to build a prototype and the skills to build a production system were generally the same, so a prototype demonstrated not only what is possible in general, but what your team of engineers can do specifically. Now these skills will diverge, so prototypes will no longer prove anything like that. They are still going to be useful for demonstrations and market research though.

wiseowise 4 hours ago | parent | prev [-]

Where?

> That does not mean you are correct. This mindset is useful only in serious reusable libraries and open source tools. Most enterprise code involves lots of exploring and fast iteration. Code quality doesn’t matter that much. No one else is going to see it.

Here? Most of those that I’ve listed IS boring enterprise code. Unless we’re talking medical/military grade.

simianwords 3 hours ago | parent [-]

fair, you have presented specific niche where the ~quality~ correctness is important in enterprise - not just libraries.

but most people aren't writing code in those places. it's usually CRUD, advertisement, startups, ecommerce.

also there are two things going on here:

- quality of code

- correctness of code

in serious reusable libraries and opensource tools, quality of code matters. the interfaces, redundancy etc.

but that's not exactly equal to correctness. one can prioritise correctness without dogmatism in craft like clean code etc.

in most of these commercial contexts like ecommerce, ads - you don't need the dogmatism that the craft camp brings. that's the category error.

skydhash an hour ago | parent [-]

Maybe you’re too entrenched in the web section of software development. Be aware that there’s a lot of desktop and system software out there.

Even in web software, you can write good code without compromising on delivery speed. That just requires you to be good at what you’re doing. But the web is more forgiving of mistakes, and a lot of frameworks have no taste at all.

simianwords 28 minutes ago | parent [-]

Do you think more SDEs work on mission-critical software, or on the kinds I mentioned?

wiseowise 5 hours ago | parent | prev [-]

> and a major goal of every single good programmer I've known has been to make themselves obsolete.

I always heard this mantra back when coders thought they were untouchable - not so much now.

ernst_klim 11 minutes ago | parent | prev | next [-]

> Before AI, both camps were doing the same thing every day. Writing code by hand.

I would argue that the split existed before AI and these camps were not the same.

There were always "quality first" people and "get the shit done ASAP" people. The former would go for better-considered designs and a more careful attitude towards dependencies. The latter would write the dirty POC code and move on, add huge third-party libs for one small function, and so on.

Both have pros and cons. The former are better in environments like aerospace or medtech; the latter thrive in product companies and the web. The second category are the people who are happiest about AI, and who will usually delegate the whole thing to agents from start to finish, including review and deployment.

simonw 14 hours ago | parent | prev | next [-]

This sounds right to me:

> Before AI, both camps were doing the same thing every day. Writing code by hand. Using the same editors, the same languages, the same pull request workflows. The craft-lovers and the make-it-go people sat next to each other, shipped the same products, looked indistinguishable. The motivation behind the work was invisible because the process was identical.

Helps explain why some people are delighted to have AI write code for them while others are unhappy that the part they enjoyed so much has been greatly reduced.

Similar note from Kellan (a clear member of the make-it-go group) in https://laughingmeme.org/2026/02/09/code-has-always-been-the... :

> That feeling of loss though can be hard to understand emotionally for people my age who entered tech because we were addicted to feeling of agency it gave us. The web was objectively awful as a technology, and genuinely amazing, and nobody got into it because programming in Perl was somehow aesthetically delightful.

rudedogg 14 hours ago | parent | next [-]

I think the real divide is over quality and standards.

We all have different thresholds for what is acceptable, and our roles as engineers typically reflect that preference. I can grind on a single piece of code for hours, iterating over and over until I like the way it works, the parameter names, etc.

Other people do not see the value in that whatsoever, and something that works is good enough. We both are valuable in different ways.

Also, there's the pace of advancement of the models. Many people formed their opinions last year, and the landscape has changed a lot. There's also some effort required in honing your skill at using them. The "default" output is average quality, but with some coaxing, higher-quality output is easily attained.

I’m happy people are skeptical though, there are a lot of things that do require deep thought, connecting ideas in new ways, etc., and LLMs aren’t good at that in my experience.

allenu 13 hours ago | parent | next [-]

> I think the real divide is over quality and standards.

I think there are multiple dimensions that people fall on regarding the issue and it's leading to a divide based on where everyone falls on those dimensions.

Quality and standards are probably in there, but I think risk tolerance/aversion could be behind some of how you look at quality and standards. If you're high on risk-taking, you might be more likely to forgo verifying all LLM-generated code, whereas if you're very risk-averse, you're going to want to go over every line of code to make sure it works just right, for fear of anything blowing up.

Desire for control is probably related, too. If you desire more control in how something is achieved, you probably aren't going to like a machine doing a lot of the thinking for you.

aleph_minus_one 8 minutes ago | parent | next [-]

I think it's a little bit more complicated.

I, for example, would claim to be rather risk-tolerant, but I (typically) don't like AI-generated code.

The solution to the paradox this creates, if one considers the model in your post, is simple:

- I deeply love highly elegant code, which the AI models do not generate.

- I cannot stand people (and AIs) bullshitting me; this makes me furious. I thus have an insanely low tolerance for conmen (and conwomen and conAIs).

bandrami 6 hours ago | parent | prev [-]

This. My aversion to LLMs is much more that I have low risk tolerance and the tails of the distribution are not well-known at this point. I'm more than happy to let others step on the land mines for me and see if there's better understanding in a year or two.

XenophileJKO 5 hours ago | parent [-]

I think there is more to it than that.

I am a high-quality/craftsmanship person. I like coding and puzzling. I am highly skilled in functional-leaning object-oriented decomposition and systems design. I'm also pretty risk averse.

I have also always believed that you should always be "sharpening your axe". For things like Java development, or anywhere I couldn't use a concise syntax, I would make extensive use of dynamic templating in my IDE. Want a builder pattern? Bam, auto-generated.

When LLMs came out, they took this to another level. I'm still working on the problems, even when I'm not writing the lines of code. I'm decomposing the problems. I'm looking at (or now debating with the AI) what is the best algorithm for something.

It is incredibly powerful, and I still care about the structure. I still care about the "flow" of the code, how the seams line up. I still care about how extensible and flexible it is for extension (based on where I think the business or problem is going).

At the same time, I can definitely tell you that I don't like migrating projects from TensorFlow vX to TensorFlow vY.

skydhash an hour ago | parent [-]

> I'm looking at (or now debating with the AI) what is the best algorithm for something.

That line always makes me laugh. There are only two points to an algorithm: domain correctness and technical performance. For the first, you need to step out of the code. For the second, you need proofs. Not sure what there is to debate.

bigstrat2003 6 hours ago | parent | prev | next [-]

> Also, there's the pace of advancement of the models. Many people formed their opinions last year, and the landscape has changed a lot.

People have been saying this every year for the last 3 years. It hasn't been true before, and it isn't true now. The models haven't actually gotten smarter, they still don't actually understand a thing, and they still routinely make basic syntax and logic errors. Yes, even (insert your model of choice here).

The truth is that there just isn't any juice to squeeze in this tech. There are a lot of people eagerly trying to get on board the hype train, but the tech doesn't work and there's no sign in sight that it ever will.

cableshaft 6 hours ago | parent | next [-]

All I know is that it feels very different using it now than it did a year ago. I was struggling to get it to do anything very useful a year ago, just asking it for a small function here or there, and often not being totally satisfied with the results.

Now I can ask an agent to code a full feature, and it has been handling it more often than not, often getting almost all the way there with just a few paragraphs of description.

domlebo70 6 hours ago | parent | prev | next [-]

Maybe I'm solving different problems to you, but I don't think I've seen a single "idiot moment" from Claude Code this entire week. I've had to massage things to get them more aligned with how I want things, but I don't recall any basic syntax or logic errors.

coffeebeqn 22 minutes ago | parent | next [-]

With the better harness in Claude Code, the >4.5 models, and a somewhat thought-out workflow, we've definitely arrived at a point where I find it very helpful. The less you rely on one-shotting, and the more you give meaningful context and a well-defined, testable goal, the better it is. It honestly does make me worry about how much better it can get, and whether some percentage of devs will become obsolete. It requires less hand-holding than many people I've worked with, and the results come out 100x faster.

smackeyacky 4 hours ago | parent | prev [-]

I saw a few (Claude Sonnet 4.6), easily fixed. The biggest difference I’ve noticed is that when you tell it it has screwed up, it is much less likely to go down a hallucination path and can be dragged back.

Having said that, I’ve changed the way I work too: more focused chunks of work with tight descriptions and sample data and it’s like having a 2nd brain.

domlebo70 an hour ago | parent [-]

Very good way to describe it. I am enjoying Opus a lot.

swader999 3 hours ago | parent | prev [-]

And yet I just eliminated three months (easily) of tech debt on our billing system in the past two weeks.

enraged_camel 13 hours ago | parent | prev [-]

I think this is a false dichotomy because which approach is acceptable depends heavily on context, and good engineers recognize this and are capable of adapting.

Sometimes you need something to be extremely robust and fool-proof, and iterating for hours/days/weeks and even months might make sense. Things that are related to security or money are good examples.

Other times, it's much more preferable to put something in front of users that works so that they start getting value from it quickly and provide feedback that can inform the iterative improvements.

And sometimes you don't need to iterate at all. Good enough is good enough. Ship it and forget about it.

I don't buy that AI users favor any particular approach. You can use AI to ship fast, or you can use it to test, critique, refactor and optimize your code to hell and back until it meets the required quality and standards.

kaffekaka 7 hours ago | parent [-]

Yes, it is a false dichotomy but describes a useful spectrum. People fall on different parts of the spectrum and it varies between situations and over time as well. It can remind one that it is normal to feel different from other people and different from what one felt yesterday.

dale_glass 2 hours ago | parent | prev | next [-]

> The web was objectively awful as a technology, and genuinely amazing, and nobody got into it because programming in Perl was somehow aesthetically delightful.

As an old-school Perl coder: not true. Lots of people had a taste for Perl. TIMTOWTDI ("there's more than one way to do it") was sold as an actual advantage.

Perl caters to things almost nobody else does, like the way you have a negative "if" in "unless" and placing conditions after the code. So you can do things like:

    frobnicate() unless ($skip_frobnicating);
Which is sure, identical function-wise to:

     if (!$skip_frobnicating) { frobnicate(); }
But it is arguably a bit nicer to read. The first way, you're laying out the normal flow of the program first, and then tacking on "we can skip this if we're in a rare special mode" afterwards. Used judiciously, I do think there's a certain something to it.

The bigger problem with Perl, IMO, is that it started as a great idea and didn't evolve far enough -- a bunch of things had to be tacked on, and everyone tacked them on slightly differently, resulting in codebases that can be terribly fragile for no good reason and no benefit.
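To make the TIMTOWTDI point concrete, here is a small, self-contained sketch (using a hypothetical `frobnicate` routine standing in for the one in the snippet above) of the same guard written several equivalent ways with Perl's postfix statement modifiers:

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $skip_frobnicating = 0;
my $total = 0;

# Hypothetical stand-in for the frobnicate() in the comment above:
# it just accumulates its argument so we can observe which calls ran.
sub frobnicate { $total += shift }

# Three equivalent guards -- TIMTOWTDI in action:
frobnicate(1) unless $skip_frobnicating;    # postfix unless
frobnicate(2) if !$skip_frobnicating;       # postfix if with negation
if (!$skip_frobnicating) { frobnicate(3) }  # block form (braces required)

# Postfix modifiers also work for loops:
frobnicate($_) for (4, 5);

print "total=$total\n";    # prints "total=15"
```

All of these compile to the same kind of conditional; which one reads best is exactly the sort of taste question TIMTOWTDI leaves to the programmer. Note that the block form requires braces -- Perl has no brace-less `if`, which is what makes the postfix forms attractive for one-liners.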

appreciatorBus 13 hours ago | parent | prev | next [-]

> nobody got into it because programming in Perl was somehow aesthetically delightful.

To this day I remember being delighted by Perl on a regular basis. I wasn't concerned with the aesthetics of it, though I was aware it was considered inscrutable, and the fact that I could read and write it filled me with pride. So yes, programming Perl was delightful.

sonofhans 5 hours ago | parent [-]

Yes, this is what I thought, too. I did program in Perl because it was beautiful. No other computer language compares so favorably with human language, including in its ambiguity. Not everyone considers this a good feature :)

suzzer99 13 hours ago | parent | prev | next [-]

Enjoying something and getting satisfaction out of it are two different things. I don't enjoy the act of coding. But I enjoy the feeling when I figure something out. I also think that having to solve novel puzzles as part of my job helps preserve my brain plasticity as I age. I'm not sure I'll get either of those from claude.

cableshaft 6 hours ago | parent [-]

> I also think that having to solve novel puzzles as part of my job helps preserve my brain plasticity as I age.

Yeah, this is a concern. I remember when I took a break from coding to work as a video game producer for a couple of years and I felt like my ability to code was atrophying and that drove me nuts.

Now I'm not so sure. There's just so much dumb garbage that's accumulated around coding nowadays, especially with web dev. So much time just gluing together or puzzling out how to get APIs or libraries to work and play nice together.

It's not like back in the days where it was relatively simple with PHP and HTML, when I first started. Much less you could do back then, sure, but expectations were a lot lower back then as well.

I might just content myself with doing Sudoku or playing/designing board games to help keep that brain plasticity going, and stop fighting so hard to understand all this junk. Or finally figure out how to get half-decent at shader math or something, at least that seems less trivial and ephemeral.

XorNot 4 hours ago | parent [-]

Every time I've had to do any webdev, I've usually just been frustrated by the fact that there's a vision of how powerful any particular architecture should be, and then all the confusion and boilerplate to try and get it there.

And then it changes every 6 months or goes in circles - and I suppose now we just gave up and are letting LLMs YOLO code out the door with whatever works.

Like I remember learning all about Redux Sagas, and then suddenly the whole concept is gone, but also I'm not actually particularly clear on what replaced it (might be time to go back to that well since I need to write a web interface soon again).

adriand 14 hours ago | parent | prev | next [-]

I feel zero sense of sadness about how things used to be. I feel like the change that sucked the most was when software engineering went from something that nerds did because they were passionate about programming, to techbros who were just in it for the money. We lost the idealism of the web a long time ago and the current swamp with apex reptiles like Zuckerberg is what we have now. It became all about the bottom line a long time ago.

The two emotions I personally feel are fear and excitement. Fear that the machines will soon replace me. Excitement about the things I can build now and the opportunities I’m racing towards. I can’t say it’s the most enjoyable experience. The combo is hellish on sleep. But the excitement balances things out a bit.

Maybe I’d feel a sense of sadness if I didn’t feel such urgency to try and ride this tsunami instead of being totally swept away by it.

dinkumthinkum 10 hours ago | parent | next [-]

I see developers talking about this idea of intense and unimaginable excitement about AI. It seems orgasmic for them, like something even the hardest drugs couldn't deliver. I find it very strange. What exactly is so exciting? I'm not disagreeing, but when you say "opportunities I'm racing towards," what does that mean? This idea of "racing towards" sounds so frenetic, I struggle to know what that could mean. What I see people doing with AI is making slop and CRUD apps and maybe some employee replacement systems or something, but I don't see this transcendental experience that people are describing. I could see a mortgage collapse or something like that, maybe that's what is so exciting? I don't know.

adriand 3 hours ago | parent | next [-]

> What exactly is so exciting? I'm not disagreeing but when you say "opportunities I'm racing towards," what does that mean? This idea of "racing towards" sounds so frenetic

For me specifically it means two products, one that is something I have been working on for a long time, well before the Claude Code era, and another that is more of a passion project in the music space. Both have been vastly accelerated by these tools. The reason I say “racing” is because I suspect there are competitors in both spaces who are also making great progress because of these tools, so I feel this intense pressure to get to launch day, especially for the first project.

And yes it is very frenetic, and it’s certainly taking a toll on me. I’m self-employed, with a family to support, and I’m deeply worried about where this is all going, which is also fuelling this intense drive.

A few years ago I felt secure in my expertise and confident of my economic future. Not any more. In all honesty, I would happily trade the fear and excitement I feel now for the confidence and contentment I felt then. I certainly slept better. But that’s not the world we live in. I don’t know if my attempts to create a more secure future will work, but at least I will be able to say I tried as hard as I was able.

simonw 9 hours ago | parent | prev | next [-]

Getting a 53% performance boost on a 20+ year old codebase by running a bunch of experiments is pretty exciting to me: https://github.com/Shopify/liquid/pull/2056

discreteevent an hour ago | parent [-]

Developers make these kinds of improvements all the time. Are you saying that it would have been impossible without AI?

cableshaft 5 hours ago | parent | prev | next [-]

Well, I have a backlog of at least 20 graveyard game projects that I stopped working on from one frustration or another over the past 20 years, or getting excited by a new exciting idea and leaving it alone, that I wouldn't mind resurrecting and finally putting some of them out there. Even if not a ton of people play them.

In fact, with it being easier to get them out there, I might care less about whether they're marketable and have a chance to make serious money, as opposed to when I was sinking hundreds of hours into them and constantly second-guessing what direction I should take the games to make them better.

The art wasn't the problem (the art wasn't great, but I could make functional art at least), it was finding the time and energy and focus to see them through to completion (focus has always been a problem for me, but it's been even worse now that I'm an adult with other responsibilities).

And that hasn't always been the issue, I did release about a dozen games back in the day (although I haven't in quite a few years at this point).

Of course someone may say 'well that's slop then', and yeah, maybe by your standards, sure. These games aren't and never were going to be the next Slay The Spire or Balatro. But people can and do enjoy playing them, and not every game needs to be the next big hit to be worth putting out into the world, just like not every book needs to be the next 1984 or Great Gatsby.

wiseowise 5 hours ago | parent | prev | next [-]

> What exactly is so exciting?

Money, opportunity, status. It is all status games. Think of it as a nuclear war on old order and new players trying to take the niche. Or maybe commies killing whites and taking over Russia?

k32k 9 hours ago | parent | prev [-]

I think those comments are signalling something much deeper about the individual.

kaffekaka 7 hours ago | parent [-]

Signalling what? Please expand.

antod 8 hours ago | parent | prev [-]

I think the rise of Facebook was possibly my first sense that our victory for "open" on the web was going to be short lived. E.g. our (well, not mine, I never used it) comms were moving to proprietary platforms.

Then with AWS our infra was moving to proprietary platforms. Now our dev tools are moving to expensive proprietary platforms.

Combined with widespread enshittification, we've handed nearly everything to the tech bros now.

qsort 14 hours ago | parent | prev | next [-]

I think the argument is "a bit too nice," it isn't a binary, motivations are complicated and sometimes both feelings coexist.

If I reflect for a moment about why I personally got into tech, I can find at least a few different reasons:

- because I like solving problems. It's sad that the specific types of problems I used to do are gone, but it's exciting that there are new ones.

- because I like using my skills to help other people. It's sad that one specific way I could do that is now less effective, but it's exciting that I can use my knowledge in new ways.

- because I like doing something where I can personally make a difference. Again, it cuts both ways.

I'm sure most people would cite similar examples.

ehnto 13 hours ago | parent | prev | next [-]

It's not a pure dichotomy though. I have always been both, and slowly mixing in agentic coding for work has left me some new headspace to do "trad" programming on side projects at home.

I love the exciting ideation phase, I love putting together the puzzle that makes the product work, and I also take pride in the craft.

kaffekaka 7 hours ago | parent [-]

I agree with this. Using agents at work has increased the possibility of me having energy left to code by hand at home. So much coding at work is not fulfilling, it is boilerplate and I do not learn anything from writing the Xth variation of the same thing.

Yes, those things should have been automated long ago, but they weren't, and now with coding agents much of them are.

forgetfreeman 3 hours ago | parent | prev | next [-]

"The craft-lovers and the make-it-go people sat next to each other, shipped the same products, looked indistinguishable."

Definitely not. Based on my observations from a career as an open source and agency developer it was obvious at a glance which of these camps any given developer lived in by their code quality. With few exceptions make-it-go types tended to produce brittle, hacky, short-sighted work that had a tendency to produce as many or more problems than it solved, and on more than one occasion I've seen developers of this stripe lose commit access to FOSS projects as a result of the low quality of their contributions.

"nobody got into it because programming in Perl was somehow aesthetically delightful."

Compared to trying to get stuff accomplished in C, Perl was an absolute dream to work with, and many devs I knew gravitated to web development specifically for their love of the added flexibility and expressiveness that Perl gave them compared to other commonly used languages at the time. Fortunately for us all, language design and tooling progressed.

thedevilslawyer an hour ago | parent [-]

Generalize much? How would you feel if code-as-craft people were called out as anti-social nerds who spent their time on the umpteenth rewrite and refactor and didn't care what impact that had on the actual users they were building for?

camgunz 23 minutes ago | parent | prev | next [-]

I think SWEs are genuinely pretty shocked and awed that codegen models can code at all, let alone code well. My guess is a lot of the agita around this is that people thought "I can code therefore I'm smart/special/etc." and then a machine comes by that can do pretty equivalent work and they're entirely unmoored. I sympathize with that, and I don't mean to dismiss it, but that's not what I feel. I really dislike this "doer vs. maker" binary stuff that comes up every now and again, as though everyone who thinks codegen models aren't perfect doesn't want to make anything. I really want to make things--good things--and I dislike the current hype wave behind codegen models because they often make it harder for me to make good things.

I've used Claude Code to build a few big things at work; I ask it questions ("where does this happen", "we have problem X, give me 3 potential causes", etc); I have it review things before I post PRs; our code review bot finds real heisenbugs. I have mixed success with all of this, but even so I find it overall useful. I'd be irritated if some place I worked, current or present, told me I couldn't use Claude Code or the like.

That said, I've not gotten it to be useful in:

- building entire, complex features in brownfield projects

- solving systemic bugs

- system design/evolution

- feature/product design and planning

- replacing senior engineer code review

It will confidently tell you it's done these things, but when you actually force yourself through the mental slog of reviewing its output, you'll realize it's failed (you also have to be an expert to perform this analysis). Now, maybe it fails in an acceptable way; maybe only slight revision is required; maybe it one-shots the change and verifying success isn't a big mental slog. Those are the good cases. More annoying are the times it fails totally and obviously, but the real nightmares are when it fails totally, yet imperceptibly. It also sometimes can do (some of) these things! But it's inconsistent, such that its successes largely serve to lower your guard against its failures.

And the mental slog is real. The artifacts you have to produce/review/ensure the model adheres to are ponderous. The code generated is ponderous. Code review is even more tedious because there's no human mind behind the code, so you can't build a mental model of the author. Getting a codegen model to revise its work or take a different approach is very hit or miss. Revising the code yourself requires reading thousands and thousands of lines of generated code--again with no human behind it--and building a mental model of what's happening before you can effectively work, and that process is time-consuming and exhausting.

I'm also concerned about the second-order effects. Because switching into the often-required deep mental focus is very difficult (borderline painful), I've seen many, many people reach for LLMs in those moments instead, first a little, then entirely. I've watched people copy/paste API docs into Gemini prompts to explain them. I've watched people unable to find syntax errors in code and paste it into ChatGPT to fix it. I'm confident I'm not the only person who's observed this, and it's a little maddening it's not getting more play.

---

I'm not saying SWEs don't fail in similar ways. I've approved--and authored--human PRs that had insidious flaws with real consequences. I've been asked to "review" PRs pre-ChatGPT that were 10x the size they needed to be. I've seen people plagiarize code, or just copy/paste Stack Overflow constantly. The difference is we build process around these risks, everything from coding patterns, PR size limits, type systems, firing people, borderline ludicrous amounts of unit tests, CI/CD, design docs, staging environments, red/green deploys, QA lists, etc.

I hate all of it! It's a constant reminder of my flaws and it slows down mean time to dopamine squirt of released code. I'd be the first person to give all this shit the axe. I would love to point Claude at the crushingly long list of PRs I have to review. But I can't, because it still has huge, huge flaws. Code review bots miss obvious problems, and they don't have enough context/knowledge about the system/bug/feature to perform a sufficiently comprehensive review. It would be a net time waste because we'd then have to fix a bug in prod or revise an already-deployed feature/fix--things I like even less than code review, if you can believe it.

These models cannot adequately replace humans in other parts of the SDLC. But, because pesky things like design and code review cap codegen models' velocity, our industry is "rethinking" it all, with no consideration of the models' flaws; "rethinking" here meaning "we're considering having an LLM handle all our code review, or not doing it at all". The only way to describe that is reckless disregard. It's unprofessional and unethical.

So, I think my grief isn't about "the craft". I don't think that's gone and I don't think I'd care if it were. My grief is about the humiliation of our profession, the annihilation of our standards and the betrayal of any representation we made to our users--indeed to ourselves. We deserve software systems that do what they say they do, and up until recently I really thought we were working hard to get there. I don't think that anymore; like many other things in our era (community, truth, curiosity, generosity, trust, learning, rationality, practice, compassion) it has retreated in the face of some flavor of self-interested, shallow grift. I really don't know how or why this happened, but regardless of the cause we truly are in a dark time.

magicalist 14 hours ago | parent | prev | next [-]

Eh, it also feels like a classic "maybe we somehow have enough perspective on this watershed moment while it's happening to explain it with a simplistic dichotomy". Even this piece interrogates the feeling of "loss" and teases out multiple aspects to it, but settles on a tl;dr of "yep, dichotomy". There's more axes here too, where that feeling can depend on what you're building, who you're building it with, time and position in your career, etc etc.

(I'll admit, though, that this also smells to me a bit too much like introvert/extrovert, or INTP/INTJ/etc so maybe I'm being reflexively rejective)

lmorchard 12 hours ago | parent | prev | next [-]

> The web was objectively awful as a technology

I, for one, remember when I could crash Netscape Navigator by using CSS too hard (i.e. at all) or trying to make a thing move 10px with DHTML. But I kept trying to make the browser do the thing.

sublinear 14 hours ago | parent | prev | next [-]

The divide was never invisible and there has always been at least three camps.

The "make-it-go" people couldn't make anything go back then either. They build ridiculous unmaintainable code with or without AI. Since they are cowboys that don't know what they're doing, they play the blame game and kiss a ton of ass.

The "craft-lovers" got in the way just as much with their endless yak shaving. They now embrace AI because they were just using "craft" as an excuse for why they didn't know what they were doing. They might be slightly more productive now only because they can argue with themselves instead of the rest of the team.

The more experienced and pragmatic people have always been forced to pick up the slack. If they have a say, they will keep scope narrow for the other two groups so they don't cause much damage. Their use of AI is largely limited to google searches like it always was.

saulapremium 7 hours ago | parent [-]

Let me guess: you happen to be one of these lone pragmatists in the sea of incompetent ass-kissers and yak-shavers who use AI for writing code?

hungryhobbit 14 hours ago | parent | prev [-]

I strongly disagree. There's always been two camps ... on everything!

Emacs vs. vi. Command-line editor vs. IDE. IntelliJ vs. VS Code. I could do like twenty more of these: dev teams have always split on technology choices.

But, all of those were rational separations. Emacs and vi, IntelliJ and VS Code ... they're all viable options, so they boil down to subjective preference. By definition, anything subjective will vary between different humans.

What makes AI different to me is the fear. Nobody decided not to use emacs because they were afraid it was going to take their job ... but a huge portion of the anti-AI crowd is motivated by irrational fear, related to that concern.

ofrzeta 8 hours ago | parent | next [-]

For the sake of argument let's assume we have a common goal: produce a software product that does its job and is maintainable (emphasis on the latter).

Now given that LLMs are known to not produce 100% correct code you should review every single line. Now the production rate of LLMs is so high that it becomes very hard to really read and understand every line of the output. While at the same time you are gradually losing the ability to understand everything because you stopped actively coding. And at the same time there are others in your team who aren't that diligent adding more to the crufty code base.

What is this if not a recipe for disaster?

antihipocrat 7 hours ago | parent [-]

I think differences in the business determine whether the maintenance/understanding aspect is important. If developing an MVP for a pitch or testing markets, then any negatives aren't much of a consideration. If working in a mature, competitive, or highly regulated domain, then yeah, it's important.

monknomo 13 hours ago | parent | prev | next [-]

what about the fear is irrational?

yoyohello13 13 hours ago | parent | prev | next [-]

It doesn’t help that the CEOs of these companies are hyping up the fear. It’s no wonder people are afraid when the people making the products are spouting prophecies of doom.

g-b-r 13 hours ago | parent | prev [-]

A huge portion of the pro-AI crowd is motivated by irrational hype and delusion.

LLMs are not a tool like an editor or an IDE, they make up code in an unpredictable way; I can't see how anyone who enjoyed software development could like that.

cableshaft 5 hours ago | parent | next [-]

Pretty much anyone who's not you will make code in an unpredictable way. I review other people's code and I go 'really, you decided to do it that way?' quite often, especially with coders with fewer years of experience than me.

That's kind of how this is starting to feel to me, like I'm turning more into a project manager that has to review code from a team of juniors, when it comes to A.I. Although those juniors are now starting to show that they're more capable than even I am, especially when it comes to speed.

bandrami 5 hours ago | parent | prev [-]

Certainly those of us who maintain and administer it don't like that

coffeefirst 18 minutes ago | parent | prev | next [-]

I find this incredibly condescending. It’s not about grief.

Instead I see a lot of people using the same tools, seeing the same results, and having wildly different reactions. I think I can attribute some of this to two factors.

First, Claude Code feels like playing a video game. You go super fast, you may find it addictive.

The second factor can be found by reading this thread: https://news.ycombinator.com/item?id=47357042

sarchertech 13 hours ago | parent | prev | next [-]

I’ve heard this thesis a lot, but it’s almost always from the result chasers.

It doesn’t resonate with me because I am a result chaser. I like woodworking because I like building something that never existed before. I don’t mind using a CNC router or a 3D printer to help me out. I don’t care about the process, I care about the result. But I care deeply about the quality of the result.

I don’t care about the beauty of the code, but I do care that nearly every app I load takes longer than it did 15 years ago. I do care that my HomePod tells my wife it’s having trouble connecting to iPhone every 5th time she adds something to the grocery list. I care that my brokerage website is so broken that I actually had to call tech support who told me that they know it’s broken and you have to add a parameter to go back to the old version to get it to work.

I care that when I use the Claude desktop app it sometimes gives me a pop up with buttons that I can’t click on.

I’ve used Claude and Cursor enough to have what I think are valid opinions on AI assisted coding. Coding is not the bottleneck to producing a quality product. Understanding the problem is the biggest bottleneck. Knowing what to build and what not to build. The next big one is convincing everyone around you of that (sometimes this takes even more time). After that, it’s obsessively spending time iterating on something until it’s flawless. Sometimes that’s tweaking an easing value until the animation feels just right. Sometimes that’s obsessing over performance, and sometimes it’s freezing progress until you can make the existing app bulletproof.

AI doesn’t help me with these. At least not much. Mostly because the time I spend coding is time I spend understanding, diagnosing, and perfecting. Not the code. The product.

It does help crank out one off tools. It does help me work in unfamiliar code bases, or code bases where for whatever reason I care more about velocity than quality. It helps me with search. It helps me rubber duck.

All of those things do boost my productivity, I think, but maybe somewhere on the order of 10% all in.

epolanski 13 hours ago | parent | next [-]

> AI doesn’t help me with these. At least not much. Mostly because the time I spend coding is time I spend understanding, diagnosing, and perfecting. Not the code. The product.

It can actually help a lot here too.

In fact I rarely have AI author or edit code, but I have it researching all the time: finding edge cases I didn't think about, digging into dependencies' code, finding ideas or alternative approaches. My usage is 90% of the time assisting with information gathering, criticizing (I have multiple reviewer skills with different personas, and I have multiple LLMs run them), refining, reviewing.

Even when it comes to product stuff, many of my clients have complicated business logic. Talking multi-tenant-company warehouse software where each process is drastically different and complexity balloons fast even for a single one of them. It helps to connect the dots between different sources of information (old Jira task, discord dumps, confluence, codebase, etc).

And it can iteratively test and find edge cases in applications too, same as you would do manually by taking control of the browser and testing the most uncommon paths.

I would do much less without this assistance.

I really don't get why people focus so much on the least empowering part (code), where it actually tends to balloon complexity quickly or overwhelm you with so much content and so many edits that you don't have the energy to follow along while maintaining quality.

sarchertech 13 hours ago | parent [-]

Yeah, I’ve used it for that and it is useful. But it’s kind of like listening to a math audiobook vs working out math problems. I still need to work out the math problems to really understand what’s going on.

I’m also nervous about the inevitable cognitive decline of relying on AI to explain everything to me.

epolanski 13 hours ago | parent [-]

> But it’s kind of like listening to a math audiobook vs working out math problems.

I don't see it that way.

I'm solving the math problem, but after coming up with a solution I start asking for alternative approaches or formulas I don't even know about.

In fact, calculus is full of such gotcha tricks and formulas; think of integrals or limits. It took the people who discovered them decades or centuries to find them.

It's not a black/white divide.

sarchertech 13 hours ago | parent [-]

You added several paragraphs after I responded, but I never said it was black and white. I said it wasn’t sufficient. By which I mean you still need to read the code, and that making changes to the code is even more beneficial for understanding.

kypro 13 hours ago | parent | prev | next [-]

> Coding is not the bottleneck to producing a quality product.

I've been saying this too. The 10x engineer stuff simply cannot make sense unless previously you were spending 90%+ of your day just writing code and now that's dropped to single digits because AI can generate it. If you spent 20-30% of your day coding before and the rest thinking about the problem, thinking about good UX, etc, then AI coding assistants mathematically cannot make you a 10x engineer. At a push they might make you a 2x engineer.

Given this I think I realised something earlier about my own output... I'm probably just an unusually good coder. I've been doing this since I was a kid so writing and reading code is basically second nature to me. When I was a young teen I would literally take my laptop on holiday with me just so I could write code - I was just that kind of person.

So I've basically always been the strongest or one of the strongest coders on any team I've been on. I very rarely have to think about how to do something in code. It's hard to think back to a time when code was a significant bottleneck for me.

However, my output was never really faster than anyone else's when it came to shipping, but the quality of my output has always been wayyy higher. And I think that was because I always spent a lot more time thinking and iterating to get the best result, while other people I work with spent far more time writing code and just trying to get something they could PR.

My problem now is that the people I work with, some of whom can't even read code, are able to spit out thousands of lines of code a day. So this is forcing me to cut corners just to keep up with the rest of the team.

6-12 months ago I'd get at least 2-3 calls a day from people on my team asking for help to write some code. Now they just ask the AI. I haven't had someone ask me a coding related question in months at this point.

I find this frustrating to be honest. I'm seeing bad decisions everywhere in the code. For example, often a change is hard because it's a bad idea. Perhaps a page on a website doesn't really look great on mobile or desktop. Previously you would have had to think about how to come up with a good responsive design and implement the right breakpoints. But now people can just ask Claude Code to build a completely different page for mobile, so they do. For a human that would be a huge effort; even someone stupid enough to think that was a good idea would probably be forced to do something that's easier to maintain and implement. But an AI? Who cares. It works. The AI isn't going to tell you no.

I know the quality of code is dropping. I see the random bugs from people clearly not understanding what Claude is writing, but if they can just ask the AI to fix it, does it even matter?

> All of those things does boost my productivity I think, but maybe somewhere in the order of 10% all in.

I'm very much like you. AI doesn't really boost my productivity at all, but that's because I care about what I build and don't find coding hard. So AI doesn't really offer me anything. All it's doing is making people who don't care what they're building and don't care about the quality of their code more productive. And putting me under pressure to trade quality for velocity.

sarchertech 2 hours ago | parent [-]

I know someone who has a friend that works at Anthropic. He says that it’s essentially 2 companies. 1 that vibe codes everything and merges without understanding, and one that spends all their time putting out the fires created by the first company.

I think we’re destined to be #2 for a while. If it gets too bad, my plan is to move into a part of the industry where quality and reliability are non-negotiable. Or start my own company and compete against established players for the smaller customer base that’s willing to pay for quality.

I go out of my way to pay for quality projects even if (and often because) they have fewer features. I think there are probably enough of us to support a lifestyle business in many niches.

I also suspect as vibe coding introduces more bugs (we’ve certainly seen this at my current company) the people willing to pay for alternatives will grow.

ares623 8 hours ago | parent | prev | next [-]

> I like woodworking because I like building something that never existed before. I don’t mind using a CNC router or a 3 printer to help me out. I don’t care about the process, I care about the result. But I care deeply about the quality of the result.

Why not outsource it to someone else? That way you do none of the work.

sarchertech 2 hours ago | parent [-]

Because delegating doesn’t give me the level of control I want, the kind of people with the quality standards and capabilities I have are extremely expensive, and I like creating things.

If I had access to a factory of apprentices that I had total control of, I probably would outsource more of it. So on the surface it seems like I’d love AI, but these particular apprentices aren’t up to my standards, there are severe limitations on how much I can train them, and I get no joy from teaching them.

sublinear 13 hours ago | parent | prev [-]

You seem to be conflating code quality with product integrity.

All those problems are caused by business decisions, not the developers. You do make a good point though that AI may enable more people to build their own when they can.

sarchertech 13 hours ago | parent [-]

The term code quality is overloaded and not really worth discussing without defining exactly what we mean.

But yes, many of those problems were caused by business decisions. But engineers are perfectly capable of creating those problems on their own. If an engineer doesn’t realize that the function they called buffers messages in memory because someone made a wrapper function around sendAsync() and called it send(), that’s a code quality issue, not a business issue (except in the broader sense where every problem is ultimately a business issue).
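A minimal sketch of that kind of trap (all names hypothetical, not from any real codebase): a method named send() that looks like an immediate network call but silently buffers in memory until an explicit flush.

```python
import asyncio

class Transport:
    """Hypothetical transport whose send_async() actually writes to the wire."""
    def __init__(self):
        self.wire = []

    async def send_async(self, msg):
        self.wire.append(msg)

class BufferedClient:
    """A wrapper that names its method send(), hiding the buffering.

    Callers reasonably assume each send() reaches the network; in fact
    messages pile up in an in-memory list until flush() is awaited.
    """
    def __init__(self, transport):
        self.transport = transport
        self._buffer = []

    def send(self, msg):
        # Looks synchronous and immediate, but only appends to a list.
        self._buffer.append(msg)

    async def flush(self):
        for msg in self._buffer:
            await self.transport.send_async(msg)
        self._buffer.clear()

transport = Transport()
client = BufferedClient(transport)
client.send("hello")
client.send("world")
# The surprise: nothing has reached the wire yet.
assert transport.wire == []
asyncio.run(client.flush())
assert transport.wire == ["hello", "world"]
```

If the process crashes before flush(), every "sent" message is lost, and nothing in the call site hints at that.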

Or if an engineer writes a naive implementation of some algorithm and adds a spinner so that an operation takes 5s to finish when it could be instantaneous if they’d thought about the problem more.

sublinear 11 hours ago | parent [-]

I've heard this opinion a lot before, but in my experience there's a lot more dysfunction behind the scenes when stuff like that happens.

It's the same in other industries too. Someone designs and implements something properly and then it gets into the hands of product people who want to rip half of it out. The business then wants some much cheaper contractors to quickly make those changes without the original engineers involved. The result is a mess.

sarchertech 2 hours ago | parent [-]

I’m not going to disagree with you there. A bad engineering culture is usually ultimately caused by other factors. Many times that is just hiring bad engineers though, such that even if the business got out of the way, the engineers still wouldn’t make a quality product.

andai 12 minutes ago | parent | prev | next [-]

You can just turn the AI off. I think that's a good idea to do regularly, in the same way it's good to have some time every day without screens and internet in your life.

I did some "trad coding" to see how much I'd atrophied, and I was startled at how difficult and unpleasant it was. I was just stuck and frustrated almost the whole time! But I persisted for 7 hours and was able to solve the problem.

Then I remembered, actually it was always like that! At least when doing something unfamiliar. That's just what programming feels like, but I had stopped being used to it because of the instant gratification of the magic "just fix my problem now" button.

In reality I had spent 7 hours in "learning mode", where the whole point is that you don't understand yet. (I was moving almost the whole time, but each new situation was also unfamiliar!)

But if I had used AI, it would have eliminated the struggle, and given me the superficial feeling of understanding, like when you skim a textbook and think "yeah I know this part" because you recognize the page. But can you produce it? That's the only question that matters.

I think that's going to become a very important question going forward. Sure, you don't need to produce it right now. But it's mostly not for right now.

Just like you don't "need" to run and lift weights. But what happens if you stop?

mrob 3 hours ago | parent | prev | next [-]

The optimal amount of generative AI in the world is zero. There are three possible scenarios, all of them bad:

1. Weaker than expected AI.

Great Depression 2.0. Widespread poverty and suffering as the enormous investments already made fail to pay off.

2. AI works as expected.

Dystopia. A few trillionaires gain absolute control of the entire world, and everyone else is enslaved or killed.

3. Stronger than expected AI.

Hard take-off singularity scenario. Extinction of all biological life.

It's probably hopeless to resist at this point, but we should at least try.

sathish316 a minute ago | parent | next [-]

You do realize that AI seems magical because text responses are converted into actions or tool calls. The AI is deciding the order in which the tools get called to fulfill your prompts. True intelligence of the kind needed for types 2 and 3 above has to formulate, plan, analyse tradeoffs, think critically, and solve novel unforeseen problems.

negroesrnegro 2 hours ago | parent | prev [-]

you may notice that the same people benefit from 1 and 2, and 3 is just a fantasy sold to the plebs to distract attention from 1 and 2

what a coincidence

JetSetIlly 24 minutes ago | parent | prev | next [-]

I think the split is between people who are in a hurry and those who are not. I'm not in a hurry and so choose not to spend money to get a quicker result.

Taking time to solve a problem myself is pleasurable and I make no apologies for that.

Horses for courses.

ontouchstart 2 hours ago | parent | prev | next [-]

The consumer/producer dichotomy misses another aspect of coding, with or without AI.

About a decade ago, STEM education was trendy and everyone was getting Lego, Raspberry Pi, etc. to build robots and write Python in the name of STEM. You can ask an LLM what STEM stands for.

The Maker movement is not about consuming, or producing for consumption. Some people might have had the incentive to become influencers and profit off it. But the majority of the kids who went through this process have become adults, moved on to being producers/consumers, and are playing with AI now. I believe in their curiosity and creativity.

Don’t worry, life will find its way.

amelius 4 hours ago | parent | prev | next [-]

There's two kinds of developers. The first one would never become a manager because they like coding too much. The other one would become a manager at the first opportunity. It is obviously the second group that is benefiting from AI the most (because not everybody can be a manager).

vb7132 3 hours ago | parent [-]

True, there are people who are good with people. And they should totally become managers.

But there is also a third kind: those who like to design systems and let them be built by someone else.

PaulHoule 14 hours ago | parent | prev | next [-]

You can use gen AI entirely in the spirit of craft. For instance if you need to consume, implement or extend some open source software you can load it up in an agent IDE and ask “How do I?” questions or “how is it that?” questions that put you on a firm footing.

danjl 13 hours ago | parent | next [-]

> I was afraid the puzzle-solving was over. But it wasn't—it just moved up a level.

The craft can move up a level too. You still can make decisions about the implementation, which algorithms to use, how to combine them, how and what to test -- essentially crafting the system at a higher level. In a similar sense, we lost the hand-crafting of assembly code as compilers took over, and now we're losing the crafting of classes and algorithms to some extent, but we still craft the system -- what and how it does its thing, and most importantly, why.

gassi 14 hours ago | parent | prev | next [-]

And contribute your changes back upstream, right?

autoexec 13 hours ago | parent [-]

Do we even want a bunch of people contributing slop upstream when (assuming it does anything worthwhile in the first place) somebody has to actually review/correct/document that code?

A handful of well intentioned slop piles might be manageable, but AI enables spewing garbage at an unprecedented scale. When there's a limited amount of resources to expend on discussing, reviewing, fixing, and then finally accepting contributions a ton of AI generated contributions from random people could bring development to a halt.

bluefirebrand 8 hours ago | parent | prev [-]

You don't need AI for this, we've had search engines and good online resources for decades

simianwords 6 hours ago | parent | next [-]

Being blunt here but this is a good example of dogmatic thought.

AI is leaps and bounds better than google at searching.

“You don’t need google for this, we have had public libraries for decades” energy.

wreath 30 minutes ago | parent | next [-]

Yeah to get the definitive answers, sure AI is quicker. Google is more like the librarian pointing you at possibly good resources to get your answers from after reading the materials and there are a lot of good learning opportunities there. LLMs just give you the answer and robs you of those opportunities.

wolvesechoes 27 minutes ago | parent | prev [-]

> dogmatic thought.

Dogmatism sometimes seems like a better thing compared to mind so open that wind blows through it without obstacles.

reverius42 7 hours ago | parent | prev | next [-]

Current AI is much, much better than current search engines (which themselves seem worse than they were decades ago, for some reason).

bigstrat2003 6 hours ago | parent [-]

It really isn't. AI has nothing on a good search engine like Kagi.

simianwords 6 hours ago | parent [-]

This is easily disproven. I mean how can someone still believe this? Wow!

I can come up with many examples that would take you ages to search in Kagi vs one prompt in ChatGPT.

You really should be updating.

wiseowise 5 hours ago | parent | prev [-]

False. Have you even used google in the last 6 years or so? The results are so bad that I stopped using it altogether. It pops up sometimes when I mistype something in a search bar, but that’s it.

And don’t make me laugh about “good online resources”. SO went downhill and is just a graveyard at this point where everything is frozen in time. It has some good discussions (that LLMs ingested), but that’s it.

You can hate LLMs all you want, but they’re godsend for interactive discussion with the material.

FabianCarbonara 5 hours ago | parent | prev | next [-]

For me AI unlocked building things I just couldn't before. My creativity and ingenuity now have an outlet that wasn't possible without agentic coding tools. That's genuinely exciting. But I also keep wondering: how long until the level of abstraction I'm working at now gets automated too?

tcgv 2 hours ago | parent | prev | next [-]

That's an interesting take. I'm likely on the same side of the split as you, since I'm very much motivated by the new possibilities agentic coding tools open when used responsibly.

Back in February, I also wrote a piece on the recurring mourning/sense of grief we are seeing for 'craftsmanship' coding:

- https://thomasvilhena.com/2026/02/craftsmanship-coding-five-...

tyleo 2 hours ago | parent | prev | next [-]

I think there is a split but I don’t think it’s between people who love hand-crafting things vs not.

I love hand crafting things, yet I’m waking up like a kid on Christmas every day, running to my computer to use Claude code. For my critical apps I review every line. For 1-off things, I’ve had Claude build single-serving applications.

If I had to guess the split is more between folks who have curiosity about the new technology and folks who fear things changing. With a decent center on that Venn Diagram of folks who feel both.

skeledrew 13 hours ago | parent | prev | next [-]

> These are real feelings about real losses. I'm not here to argue otherwise.

I'll argue it. Technically, there's no loss IMO, only gain. Craft lovers can still lovingly craft away, even if they have to do it on their own time instead of on their now-AI-dominated day job, just like in ye olde days. Nothing's stopping them.

But now result chasers can get results faster in their chasing. Or get results at all. I'm a proud result chaser now making serious progress on various projects that I've had on ice anywhere from months to years and occasionally lamented not having time/energy for them. And I also note my stable of tools, for both AI-related dev and other things, has grown greatly in a short period of time. And I'm loving it.

rimunroe 11 hours ago | parent [-]

> Craft lovers can still lovingly craft away, even if they have to do it on their own time instead of on their now-AI-dominated day job, just like in ye olde days. Nothing's stopping them.

…except time, which sadly is limited. I’m sad about the real potential that I might not get to be paid to do something I enjoy so much anymore. I care about end products for sure, but that’s not why I’m in this career.

I do this because a large part of the work engages me in a pleasant way. I like TDDing in a tight loop. I like how it forces me to think one step at a time, how I get to stop myself from jumping ahead, and how I get to verify my thoughts or theories within seconds. I find efficiently manipulating text in my editor satisfying. I love the feeling of being validated that my architectural choice was right when a spec changes and the required code change is obvious, minimal, and clearly expressed. I enjoy the feeling of obtaining mastery for mastery’s sake rather than because it lets me create a product.

I’ve felt incredibly lucky for over a decade that my work gave me the opportunity to chase that. I may find enjoyment in wrangling AI, but I’m skeptical it’ll scratch that itch. If it doesn’t and I wanted to still scratch it, I’d have to do it on my own time. That would mean sacrificing time I’ve previously spent on other interests, and I don’t have a ton of time to begin with.

skeledrew 7 hours ago | parent [-]

> sacrificing time I’ve previously spent on other interests

I'd say this is the crux of the matter. Having competing interests and choosing what to do and how much of it is a balancing act, but you can still get that desired satisfaction. You could perhaps even start your own company if it's that important to you.

For me, projects just keep accumulating regardless of how much time I dedicate to them (outside of the mandatory things). Maybe I just have too many things I'd like to build. Definitely thinking about starting a company myself now that there's all this capability available.

vb7132 3 hours ago | parent | prev | next [-]

Having managed developers for over five years, I have seen two categories of devs (to simplify the argument, let's focus just on the smart ones):

- one group loves to work independently and gets you the results, they are fast and they figure things out

- second group needs direction, they can be creative in their space but check-ins and course corrections are needed.

AI feels like group1 but it's actually group2. In essence, it doesn't fully fit in either group. I am still figuring out this third group.

api 14 hours ago | parent | prev | next [-]

I'm a bit in the middle. I enjoy the craft but I also seek and enjoy the result.

The thing about AI is that you don't have to use it for everything. Like any other tool you can use it as much as you'd like. Even though I like the craft, I find myself really enjoying the use of AI to do things like boilerplate code and simple tests. I hate crafting verbose grunt work, so I have AI do that. This in turn leaves me more time to do the interesting work.

I also enjoy using AI to audit, look for bugs, brainstorm, and iterate on ideas. When an idea is solid and fleshed out I'll craft the hard and interesting parts and AI-generate the boring parts.

woodenbrain 2 hours ago | parent | prev | next [-]

This resonates with me. I got into development exactly because I wanted to make things useful to me, with limited background in programming, over 20 years ago. Tech eventually got in the way. I was so bored with sandboxes, entitlements, signing apps, etc. The joy was gone. Now I am developing a new app with AI help. I may not be using the tools optimally but I don't care, it's a process. And it's a lie that this is a fast process. I have been working on one app for months, and now I have a pretty solid new app to show for it. Looking for MacOS Apple Music users for beta testers, BTW. Please have a look. https://www.woodenbrain.com/grooves.html

jacquesm 14 hours ago | parent | prev | next [-]

There are far more divides than just that one.

For instance, the ones that look at it from an economics perspective, security perspective, long term maintainability perspective and so on. For each of these there are pros and cons.

ares623 14 hours ago | parent [-]

all this so people like us can do a job that wasn't that hard to begin with and was actually very comfortable all things considered, just a tiny bit easier in a way that isn't even measurable.

randlet 14 hours ago | parent [-]

> a job that wasn't that hard to begin with

The more experience I get the harder the job seems tbh

Avicebron 14 hours ago | parent [-]

Have you gotten to the part where you barely even get to write code anymore and just manage people's expectations full time yet?

mekael 14 hours ago | parent [-]

Ah, management without managing. It's depressing and engaging at the same time. Depressing because palace intrigue is exhausting and fraught with peril. Engaging because I love explaining things to people and watching everything click into place for them (see the 1 of 10k xkcd comic).

kalalakaka 14 hours ago | parent | prev | next [-]

After years of working at startups I’ve long since abandoned any notion of craft at work. I have developed a very keen sense for harmfully cutting corners though, and unreviewed AI code (or unreasonably large PRs - defined by a size you can’t comfortably review) is absolutely cutting corners. It’s nothing to do with craft and everything to do with both correctness and incurring massive amounts of future debt.

sesm 14 hours ago | parent | next [-]

Yep it's not 'result chasers' but people who want to get credit while avoiding real work. And when their stuff breaks they are always too busy with something else or moved on to another project.

dang 8 hours ago | parent | prev | next [-]

Could you please not create an account for every few comments you post? This is in the site guidelines: https://news.ycombinator.com/newsguidelines.html.

You needn't use your real name, of course, but for HN to be a community, users need some identity for other users to relate to. Otherwise we may as well have no usernames and no community, and that would be a different kind of forum. https://hn.algolia.com/?sort=byDate&dateRange=all&type=comme...

skeeter2020 13 hours ago | parent | prev [-]

This better matches my experiences and feelings than the divide which the author discusses. The craft is in the entire building, not specifically the coding aspects. I want to do a great job building a house, and if AI helps or even completes some aspects while meeting standards that's awesome. The problem is it hasn't yet shown it can be trusted as a sub trade, and we've got people outsourcing the entire project to an army of agents. The result looks a lot like the condo I bought a few years ago.

koenschipper 6 hours ago | parent | prev | next [-]

I think I'm in the middle. At first I was definitely against using any AI because I loved the craft. But over the past 12-18 months I've been using it more and more.

I still love to code by hand for a fun afternoon. But in the long term, I think you are going to be left behind if you refuse to use AI at all.

vaylian 6 hours ago | parent [-]

"Left behind" in terms of speed or are there other aspects that people are missing out on?

cableshaft 5 hours ago | parent | next [-]

Speed has a lot to do with it, yeah.

A.I. is now often doing in 5-10 minutes what would take me hours on my own for any given task (well based on the last couple of weeks at least, I wasn't doing much agent based A.I. coding before that).

I was pretty much having a real-time conversation with my superiors about a feature the other day: showing them updates just a couple of minutes after they suggested them, getting feedback on how something should look.

Something that would have taken me an hour or more each time they wanted a change or something new added.

Now that cuts both ways, as it started to seem like they were expecting that to be the new normal, and I started to feel like I had to keep doing that or it would seem like I'm not actually working. And it gets exhausting keeping up that pace, and I started worrying when anything did take me extra time.

wiseowise 5 hours ago | parent [-]

> I was pretty much having a real-time conversation with my superiors, showing them updates just a couple of minutes after they suggested them, for a feature the other day, getting feedback on how something should look.

Seems like a nightmare.

cableshaft 4 hours ago | parent [-]

I did choose to do it, so it wasn't a nightmare. I was wanting extra guidance on what to include, and so I asked them (while they both weren't busy), they gave me feedback, I did that in like a minute or two with A.I. (when normally it would have taken me a lot longer), so I showed them and was like 'how's this?' and they said 'could you change it to be like this?' etc back and forth for about 45 minutes. It was about the equivalent of a call except it was over Slack and I could provide something they could look at quickly.

What could be considered a nightmare, perhaps, is suddenly feeling like 'uh oh, is this going to be the new normal? Will I have to keep doing this all the time now, or else they'll think I'm not getting any work done?'

ares623 6 hours ago | parent | prev [-]

It's this season's "have fun staying poor"

comrade1234 14 hours ago | parent | prev | next [-]

I'm a craft lover but I like using the Ai for tedious tasks. Just today it tracked down a library conflict in a pom that from experience would have taken a day of trial and error.

daft_pink 11 hours ago | parent | prev | next [-]

It kinda reminds me of the first time I visited a maker space years ago. It was full of cutting-edge laser cutters, 3D printers, oscilloscopes. During the tour they told us we could make anything. Then the tour ended, I got to meet the real users, and they showed me what they made. Most people just made random things like etching Pokémon into their pencil case. I left thinking: wow, these people could make anything, and that’s what they made. All I’m saying is, if you gave the average person something that Lockheed Martin used to build the SR-71, the average person would probably just make a toy car, and that SR-71 is not going to get built.

bluefirebrand 8 hours ago | parent [-]

The average person can't afford the materials to build an SR 71 but they can afford a pencil case.

ernesto905 13 hours ago | parent | prev | next [-]

> Before AI, both camps were doing the same thing every day. Writing code by hand. Using the same editors, the same languages

Throughout college I would see a pretty stark divide, where most people would use VS Code on Mac or on Windows + WSL. But there was a small minority who would spend a lot of time 'tinkering' (e.g., experimenting with OSes like Nix/Gentoo, or tweaking their dev environment). Maybe I'm misunderstanding what a 'craft lover' means here, but it seemed to me at the time that the latter camp had more technical depth, just based on conversation. Can't speak to the result in terms of test scores, though it would be interesting to see any data on that if it exists.

Ericson2314 14 hours ago | parent | prev | next [-]

> Before AI, both camps were doing the same thing every day. Writing code by hand. Using the same editors, the same languages,

Hell no. I, a craftsman, was going out of my way to use things like Haskell. I was very aware of the divide the entire time. The present is a relief.

HoldOnAMinute 13 hours ago | parent | prev | next [-]

I am enjoying crafting really good requirements documents. I use an iterative process. The implementation is the test of the requirements document. If it's not right, I adjust the doc, discard that implementation, and try again.

suzzer99 13 hours ago | parent [-]

> I am enjoying crafting really good requirements documents.

Can you please come train the product people at my job? Maybe some of your love of the game will rub off on them.

hofo 11 hours ago | parent | prev | next [-]

There’s a similar divide in the woodworking community between people that use CNC and the like to mill and shape wood vs those that use hand and power tools.

kaffekaka 6 hours ago | parent [-]

Isn't the question really about what you can get paid for, have as your job?

People want to keep doing what they enjoy as their fulltime profession, making a (good) living out of it. Perfectly understandable. But neither AI, powertools or image generation is preventing someone from doing craft coding, hand woodworking or drawing/painting in their spare time and enjoying it.

I love drawing but since it went out of business as a viable career long before I was born I have never felt the grief of losing it. With craft coding we happen to exist at the point in time where it suddenly gets significantly de-crafted.

jigglypuff-mab 5 hours ago | parent | prev | next [-]

This resonates. I've noticed my own relationship with coding shifting in ways I didn't expect.

The grief isn't really about losing the craft—it's about losing the context where that craft made sense. When I started, "good code" meant something specific: elegant abstractions, clever patterns, the kind of stuff you'd show off in a code review. Now? The best code might be the prompt that gets an agent to write 500 lines of solid boilerplate in 30 seconds.

What's weird is I'm not even sad about it. I'm more... untethered? Like the identity I built around "being a good programmer" is dissolving, and underneath there's just... someone who likes making things work.

Maybe that's the real split: people who tied their identity to how they worked vs. people who tied it to what they built.

Garlef 4 hours ago | parent | prev | next [-]

I like both worlds: Tinkering and vibe coding.

My shift in perspective is really: not all code deserves to be hand-crafted. Some stuff can be wonky as long as it does its job.

(And I think the wonkiness will decrease in vibe coding as harnesses improve.)

simianwords 6 hours ago | parent | prev | next [-]

The split is about people who care about the commerce behind software development vs people who care about the craft.

The commerce camp understands the tradeoffs needed in a competitive environment: cutting corners where possible, not being dogmatic about unit tests and clean code, and so on.

If you notice - the craft people rarely think about commerce because coding is an artistic expression.

wiseowise 5 hours ago | parent [-]

Your generalizations are as useful as me saying that people in the first camp are just sloppy workers who dump their subpar code on others in the hope that they’ll never need to touch it again.

furyofantares 14 hours ago | parent | prev | next [-]

Author doesn't care about their blog writing as craft, either (it's been fed through an LLM.)

lmorchard 13 hours ago | parent | next [-]

Sure, I ran the post past an LLM for some ideas on clarity and tightening it up - but I wrote, edited, and published it myself.

EagnaIonat 6 hours ago | parent | next [-]

If the core of the post is yours I think it is fine, but there are so many pieces the LLM always uses in these things that they stand out more than em-dashes.

Some examples.

> I've felt the grief too—but mine resolved differently than I expected, and I think that says something about what kind of developer I've been all along.

> I kept having this nagging sense that we were mourning different things.

> Here's what I notice about my grief: none of it is about missing the act of writing code. It's about the world around the code changing.

> If you're mourning the context—the changing web, the shifting career landscape, the uncertainty—that's real too, but it's more actionable.

It uses these kinds of patterns over and over, so often that it becomes obvious. Just go on LinkedIn.

"It's not X, it's Y"

dang 8 hours ago | parent | prev | next [-]

It's increasingly clear that the LLMs leave more of a mark than authors realize when they run their writing through for a touching-up. This has been coming up a lot lately: https://news.ycombinator.com/item?id=47346449.

That's why readers end up reacting to the LLM imprints rather than the content.

I don't mean to be critical because it's a good article! But I bet if you shared the version before it was "tightened up", most of us would prefer it.

(I suppose I'd better add that this isn't a criticism of LLMs either - it's about figuring out how to use them well: https://news.ycombinator.com/item?id=47342045)

thinkingemote 4 hours ago | parent | prev | next [-]

It's the split. Many of us are mourning, you are not.

Maybe the stages of grief are not aligned yet. Or maybe it's as your post says there are two types of people.

furyofantares 13 hours ago | parent | prev [-]

It's too bad it has so much grating LLM-voice. I don't think you typed all the LLM-isms, and they make it hard to know how much to trust that the rest is what you intended to convey.

lmorchard 12 hours ago | parent [-]

I guess I have a grating LLM-voice, then, because I don't think it sounds particularly different than how I've written other posts.

furyofantares 11 hours ago | parent [-]

I've read through your comments on HN and you really don't. Comments and a blog post are different things, but the difference in voice is stark. In your comments it's clear the person writing them cares about things.

suzzer99 13 hours ago | parent | prev [-]

Am I the only one who comes to the comments first to see if the blog/article is even worth reading?

kaffekaka 6 hours ago | parent | next [-]

It's classified.

hackable_sand 8 hours ago | parent | prev | next [-]

Yes

reverius42 7 hours ago | parent | prev [-]

No

bushido 3 hours ago | parent | prev | next [-]

Before AI, as a head of product (who has always written code), I did this thing where when I was thinking through an idea or a product direction, I built the solution three or four times before I found the shape and direction that I liked. And once I liked it, I put it on a roadmap for one or more of my teams to execute on.

Candidly, saying "before AI" is a little disingenuous, because since AI has gotten better at coding in the last year, my workflow has gone back to exactly what it was when I had a 40-person team reporting to me.

I still go through three, four iterations before a final direction is picked. It still takes me two, three weeks to think through an idea. Three things have changed.

1. When I think of a possible direction, a new version gets spun up within minutes to a couple of hours, usually in a single shot.

2. I can work through more big ideas that require some amount of coding-based ideation than I could previously.

3. When a direction is decided on, the team comes in to deliver the outcomes at a much quicker pace. Previously, it could have been 1 month of ideation + 2-8 sprints; now it's 2-4 weeks of ideation and 1-2 days to final delivery.

All in all, while I can see where the author is coming from, the grief has been different for me.

I've had a lot of good developers, product managers, product owners, and designers that have had the privilege of helping develop their skills in the past. That was the necessity of ensuring that we were developing talent who would then go on to produce good work on our teams.

And I'm at a stage now where a three-person team that I have can produce more than the 40 could, and I am likely never going to need to develop the skills the way I used to. The loss is not from coding, I thoroughly enjoy how that's evolved. The loss is from the white space around it.

est 7 hours ago | parent | prev | next [-]

I wonder what would happen if Claude were exploited by hackers and all the chat logs were released one day.

Nearly every corporate secret would be instantly leaked.

heavyset_go 6 hours ago | parent [-]

What happens when providers achieve AGI, as they claim they will, and then use what they've learned from your company's prompts, context, memory and outputs to put you out of business?

Even without AGI, that internal source code, data, and trade secrets are valuable on their own. It would suck if someone capitalized on them.

totetsu 14 hours ago | parent | prev | next [-]

This reminds me of the divide between Role-players and Number-chasers in the once-upon-a-time MUD players communities.

nlawalker 13 hours ago | parent | prev | next [-]

>I think recognizing which kind of grief you're feeling is the actually useful thing here. If you're mourning the loss of the craft itself—the texture of writing code, the satisfaction of an elegant solution—that's real, and no amount of "just adapt" addresses it. You might need to find that satisfaction somewhere else, or accept that work is going to feel different. Frankly, we've been lucky there's been a livelihood in craft up to now.

The blog post is all about being clear-eyed about the source of grief, but doesn't seem to articulate that it's the livelihood that's gone, not the craft. There's never been a better time to practice the craft itself.

lmorchard 13 hours ago | parent [-]

Well, yeah, that's what a lot of folks are sad about - they can't practice the craft concurrently with the livelihood quite as much. But if you don't have a livelihood, you probably don't have as much space for craft at all.

rimunroe 11 hours ago | parent [-]

Exactly. I said this elsewhere in here, but I’ve felt extremely lucky that for the last 13 years I’ve gotten paid to do something that would otherwise have to be a hobby. The problem is that I have other hobbies already and am a parent with limited time to devote to such things in the first place. It’s valid to miss things you were extremely lucky to have in the first place.

RegW 13 hours ago | parent | prev | next [-]

I don't know how I feel about this. I started programming in 1979.

I went for a job in AI in the late 1980s and realised from the bonkers spin of the company founders that it really wasn't the 5 to 10 years away that I was being told. I went looking for something that was going to deliver a result.

I came back to it maybe 6 years ago while on the bench at a consultancy. I got into trying various Kaggle challenges. Then the boss got the bug and wanted to predict the answers to weird spurious money-making questions. I tried, but even when there was good data, I didn't know how to do better than anyone else. When there wasn't good data it just produced complete shit.

Since then the world has changed. Everything I touch has AI built in. And it's really good. When you don't know your way around something or you've got stuck it really gets you moving again. Yeah, if it regurgitates a stupid negative example from the documentation as if it is "the way to do it", you just ignore it because you have already read that.

Now, every week I'm subjected to lectures by people who don't know how to code about how productive AI is going to make me. Working in the financial sector, every Californian pipe dream seems to be an imperative, but all must be verified by an adult. My IDE tries to insert all sorts of crap into my production code as I type, and then I'm supposed to allow it to generate my unit tests.

I know it will get better, but will it be another 5 to 10 years?

Are we 80% of the way there yet?

antonvs 13 hours ago | parent [-]

> Since then the world has changed. Everything I touch has AI built in. And it's really good.

Clearly you don’t use Amazon’s Alexa.

Roguelazer 14 hours ago | parent | prev | next [-]

The important thing to remember is that for a large number of people (in the US), "work" is a place where they do things that they hate for eight hours a day, for people they hate (surveys routinely show between 40% and 60% of people are "satisfied" with their jobs). Those of us who are in the tech industry because we like actually programming computers (the "craft-lovers", in the parlance of this blog post) have been lucky enough to have jobs where we get to actually do something we enjoy (even if it's intermingled with meetings and JIRA). If AI slop really is the future and programming becomes as rare of a job as hand-building wood furniture, then most of us are going to be living the normal experience of capitalism in a way that we are probably not well-prepared for.

Personally, I have noticed that I still produce substantially more and better code than the people at my company spending all day writing prompts, so I'm not too worried yet, but it seems plausible at some point that a machine that stole every piece of software ever written will be able to reliably turn a few hundred watt-hours of electricity into a hallucination-free PR.

sockgrant 13 hours ago | parent [-]

I agree some people go to work to work, and Claude is fine/good for them, but I feel that characterization of those of us who love Claude is disingenuous. I'm a creative; while I loved coding and honed my craft, it was creating that always had me hooked. Claude is creating on steroids. Not to mention, it can help you massively improve your code cleanliness. All of the little nice-to-have features, the cleanups, the high unit test coverage, nagging bug fixes, etc., they're all trivial to do now.

It’s not the same as writing code, but it’s fun.

If your coworkers can’t outpace your code output they’re either not using opus4.6 or they aren’t really trying.

It’s pretty easy to slam 20 PRs a day with some of them being complex changes. The hardest part is testing the diffs, but people are figuring that out too.

ccosky 12 hours ago | parent | next [-]

I have a suite of Claude skills all about craftsmanship. Refactoring, renaming, deconstructing god classes, detecting deleted code, etc. I've never written better, more readable, more maintainable code in my life than I have with Claude. All of that is trivial and hardly takes any time at all to accomplish.

Before moving to agentic AI, I thought I'd miss the craftsmanship aspect. Not at all. I get great satisfaction out of having AI write readable, maintainable code with me in the driver's seat.

dinkumthinkum 9 hours ago | parent [-]

But, would you feel that same satisfaction out on the street?

g-b-r 12 hours ago | parent | prev [-]

> it can help you massively improve your code cleanliness. All of the little nice-to-have features, the cleanups, the high unit test coverage, nagging bug fixes, etc., they’re all trivial to do now.

It can help if you write poor code without it, probably

High unit test coverage only means something if you carefully check those tests, and if everything was tested

sockgrant 10 hours ago | parent [-]

The only way Claude can help improve your code cleanliness is if you write poor code?

Code coverage means nothing if you didn't carefully check every test? "and if everything was tested" do you know what code coverage is?

not gonna engage the trolling

g-b-r 8 hours ago | parent [-]

> The only way Claude can help improve your code cleanliness is if you write poor code?

No? You assert that it writes better code than the average software developer?

> Code coverage means nothing if you didn't carefully check every test? "and if everything was tested" do you know what code coverage is?

Do you know?

Code coverage only tells what amount of the code gets *touched* by the tests.

To achieve code coverage it's enough to CALL the code, it doesn't tell you anything about the correctness of the tests: they could all end with a return true, and a code coverage tool would be perfectly happy.

So, yes, if you don't carefully check the test suite that the agent writes, it might well be worthless (or simply much less useful than you assume it to be, more realistically).

With "if everything was tested" I meant that you also need to check if the agent wrote all the tests that are needed, besides verifying that the ones it wrote are correct.
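To make the coverage point concrete, here's a minimal hypothetical sketch (the `discount` function and test names are invented for illustration): a "test" that merely calls the code achieves 100% line coverage while asserting nothing, so the bug sails through.

```python
# Hypothetical illustration: full line coverage, zero verification.

def discount(price: float, percent: float) -> float:
    # Buggy on purpose: adds the discount instead of subtracting it.
    return price * (1 + percent / 100)

def test_discount_runs() -> None:
    # Executes every line of discount(), so a coverage tool reports
    # 100% for it -- yet this "test" passes despite the bug above,
    # because it never checks the result.
    discount(100.0, 20.0)

test_discount_runs()
print(discount(100.0, 20.0))  # prints 120.0, not the 80.0 a correct version would return
```

A correct test would assert `discount(100.0, 20.0) == 80.0` and fail immediately; the coverage number is identical either way.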

wiseowise 4 hours ago | parent [-]

> You assert that it writes better code than the average software developer?

Absolutely. It contains a lot, if not the majority, of all the code available to us right now and can reason (whatever it means for LLMs to reason, anyway) about it. It absolutely demolishes the average software developer and it's not even close.

> To achieve code coverage it's enough to CALL the code, it doesn't tell you anything about the correctness of the tests: they could all end with a return true, and a code coverage tool would be perfectly happy.

> So, yes, if you don't carefully check the test suite that the agent writes, it might well be worthless (or simply much less useful than you assume it to be, more realistically).

That’s like saying that if you don’t check every line your coworker writes it becomes worthless.

blobbers 14 hours ago | parent | prev | next [-]

Pointy haired bosses be looking for results.

Engineers be loving the craft.

It's a dance, but AI is unfortunately looking at us like we're dancing, and meanwhile it's built a factory.

dude250711 14 hours ago | parent | prev | next [-]

I just do not want to deal with other people's AI-generated code.

HoldOnAMinute 13 hours ago | parent [-]

Your AI can rewrite it to your own standards for free.

You can tell your AI to read their code, and create a new requirements document for a clean-room implementation.

Then you have your AI implement using your own requirements document.

toraway 13 hours ago | parent | next [-]

That doesn't sound like it would help while reviewing a PR full of verbose slop in your day job.

g-b-r 12 hours ago | parent | prev | next [-]

That sounds so reliable, I'm so confident it would get it right

We're so screwed

skydhash 13 hours ago | parent | prev [-]

In a review process?

jaredcwhite 9 hours ago | parent | prev | next [-]

I'm just not buying this framing. At all.

I'm not sure what it is I'm supposed to be mourning. I'm using my skills and continuing in my craft the way I have for several decades and the way I will continue to for several more. I eschew the LLMs not because they are threatening to me, but because they are unsound products built & promoted by people who are fundamentally sociopathic.

If I am to mourn, I can mourn the unveiling of deep ethical lapses across the entire tech industry. They were clearly there already, we just didn't realize that if you were to put any random assemblage of techies into a room, a decent handful of them are sadly unethical people lacking a moral compass. We know that now. They love LLMs, because they love power and they dislike having to forgo perceived "utility" by recognizing the importance of caring for others in a community.

While they do their utmost to demolish craft & artistry & tradition, I will be doing my utmost to preserve & defend all of those things. I am no stranger to boycotts, and I certainly don't suffer from FOMO. And I'm thankful I know a whole lot of people who feel much as I do.

mllev15 8 hours ago | parent | next [-]

Completely agree. This is the framing people use to cope with their moral and psychological failure. Using tools that are literally scorching the earth just so they don’t have to use their brains anymore.

wulfstan 6 hours ago | parent | prev [-]

100% with you. I start a new job on Monday and I intend to keep building great young engineers who love their craft and their community. Enough of this vampiric unethical horse manure.

CharlieDigital 14 hours ago | parent | prev | next [-]

The divide is a matter of perspective.

I'm a 23+ year dev; among the highest level ICs in my org.

It's still craft, it's just that the craft is different. I don't write *.ts, *.cs files anymore; I write *.md files that other devs are using, that we're using as guardrails, that ensure we minimize the slop while increasing speed and basically lift every developer's level up by several notches.

I went from building one kind of framework/platform level artifact to another type of framework/platform level artifact.

If one's perspective is that it's just a shift in what "craft" means, then it's still craft. I'm still building systems; just a different kind of system.

jacquesm 14 hours ago | parent | next [-]

You're using it as a 'super compiler', effectively a code generator and your .md file is the new abstraction level at which you code.

But there is a price to pay: the code that you generate is not code that you understand, and when things go pear-shaped you will find that the deterministic element that made compilers so successful is missing from code generated from specs dumped into an AI. If you one-shot it, you will find that the next time you do this your code may come out quite different if it isn't a model that you maintain. It may contain new bugs or revive old ones. It may eliminate chunks of the code and you'll never know, and so on.

There is a reason that generated code always had a bit of a smell to it and AI generated code is no different. How much time do you spend on verifying that it actually does what's written on the tin?

Do you write your own tests? Do you let the AI write the tests and the code? Are you familiar with the degree to which AIs can be manipulated to do stuff that you thought they weren't supposed to? (A friend of mine just proved this to his boss by bribing an AI with a 'nice batch of pure random data' to put a piece of unreviewed code into production by giving itself the privileges required to do so...)

CharlieDigital 14 hours ago | parent | next [-]

We have human reviews on every PR.

Quality and consistency are going up, not down. Partially because the agents follow the guidance much more closely than humans do and there is far less variance. Shortcuts that a human would make ("I'll just write a one-off here"), the agent does not...so long as our rules guide it properly ("Let me find existing patterns in the codebase.").

Part of it is the investment in docs we've made. Part of it is that we were already meticulous about commenting code. It turns out that when the agents stumble on this code randomly, it can read the comments (we can tell because it also updates them in PRs when it makes changes).

We are also delivering the bulk of our team level capabilities via remote MCP over HTTP so we have centralized telemetry via OTEL on tool activation, docs being read by the agents, phantom docs the agent tries to find (we then go and fill in those docs).

jacquesm 14 hours ago | parent | next [-]

> We have human reviews on every PR.

There are some studies about maintaining attention over longer periods of time when there is no action required. It will be difficult to keep that up forever so beware of review fatigue and bake in some measures to ensure that attention does not diminish over time.

CharlieDigital 14 hours ago | parent [-]

The point of reviews is that the process of reviews is a feedback cycle where we can identify where our docs are short. We then immediately update the docs to reflect the correction.

Over enough time, this gap closes and the need for reviews goes down. This is what I've noticed as we've continued to improve the docs: PRs have stabilized. Mid-level devs that just months ago were producing highly variant levels of quality are now coalescing on a much higher, much more consistent level of output.

There were a lot of pieces that went into this. We created a local code review skill that encodes the exact heuristics the senior reviewers would use and we ask the agent to run this in AGENTS.md. We have an MCP server over HTTP that we use to deliver the docs so we can monitor centralized telemetry.

The objective is that at some point, there will be enough docs and improved models that the need for human reviews decreases while quality of code reaches a steady state that is more consistent than any human team of varying skill level could produce.

One thing we've done is to decouple the docs from the codebase to make it easier to update the docs and immediately distribute updates orthogonal to the lifecycle of a PR.

(I'll have a post at some point that goes into some of what we are doing and the methodology.)

g-b-r 12 hours ago | parent [-]

> The objective is that at some point, there will be enough docs and improved models that the need for human reviews decreases while quality of code reaches a steady state that is more consistent than any human team of varying skill level could produce

There will never be a point when human reviews will be less needed; you're doomed to ship something horribly insecure at some point, if you ever remove them; please don't.

AnimalMuppet 14 hours ago | parent | prev [-]

> Partially because the agents follow the guidance much more closely than humans do and there is far less variance.

Ouch. Managing human coders has been described as herding cats (with some justice). Getting humans to follow standards is... challenging. And exhausting.

Getting AIs to do so... if you get the rules right, and if the tool doesn't ignore the rules, then you should be good. And if you're not, you still have human reviews. And the AI doesn't get offended if you reject the PR because it didn't follow the rules.

This is actually one of the best arguments for AIs that I have seen.

CharlieDigital 14 hours ago | parent [-]

Yes, as I mentioned in my other replies, what I've seen is that quality has gone up and coalesced around a much higher bar with far less variance than before as we've refined our docs and tooling.

In some cases, it was "instant"; dev's MCP server connected to our docs was down -> terrible PR. We fix the MCP connection and redo the PR -> instantly follows the guides we have in place for best practices.

operatingthetan 14 hours ago | parent | prev [-]

>A friend of mine just proved this to his boss by bribing an AI with a 'nice batch of pure random data' to put a piece of unreviewed code into production by giving itself the privileges required to do so...

Okay that's pretty hilarious. Everyone has a vice!

jacquesm 14 hours ago | parent [-]

There is a chapter two to the story but I don't want to out my friend. You never know who reads HN.

sifar 9 hours ago | parent | prev | next [-]

You are building a system that has a shaky mental model and which may or may not adhere to the *.md files.

wiseowise 4 hours ago | parent | prev | next [-]

> I write *.md files that other devs are using, that we're using as guardrails, that ensures that we minimize the slop while increasing speed and basically lift every developers level up by several notches.

> > among the highest level ICs in my org.

Checks out. Real straight shooter with upper management written all over him.

tern 14 hours ago | parent | prev [-]

Came here to say something similar. For me, the craft aspect is now even more exciting because I can craft more ambitious things without getting bogged down in the details. For me, refining my conceptual model, drawing diagrams, finding the right way to think about something was the craft.

Maybe that's another way of saying: I was trained as a designer, and now the distinction between design (read: architecture, service-design, product, ux, cx) and programming is blurring.

sockgrant 13 hours ago | parent [-]

Heck yeah! Love that way of putting it. Agree. Now there’s more time to focus on making the right architecture and carrying it out. It’s no longer a days long task to do a big refactor to remove code smells.

frankc 14 hours ago | parent | prev | next [-]

I think it's more granular than this, though. I also like to "make computer do thing" and have enjoyed using AI. But I also like building systems, optimizing systems. I find AI is a great partner in that. I can churn out prototypes more quickly, iterate on them more quickly etc. That also applies intra-system level. I might have a theory about how a different data structure or caching layer will affect application performance. It's now so much faster to test those kind of theories, and actually building good scaffolding around them to test them scientifically.

Yes, sometimes I can also ask AI to evaluate things at the system level and it often has surprisingly good insights, but that is usually a collaboration where our powers combined comes up with a better solution. I enjoy that process, too.

I do sympathize with the people "in mourning". I feel like this is really about how your identity is tied up in what you do. I have generally identified as a command line wizard. The xkcd of the guy flying in with "perl" very much speaks to me. But AI absolutely crushes at this. It's not that useful a skill anymore. Now I identify more as a local AI expert instead :D

beej71 13 hours ago | parent [-]

> I feel like this is really about how your identity is tied up in what you do.

This is it for me. One thing that's important to my identity is making things. And I have a lot of trouble saying I made a thing that I asked someone (or something) else to make for me.

I know you're going to say, "But I'm making things, too!" However...

I could crank out a project a day with Claude Code and slap them all up on GitHub for my green squares, and I could say that I made them all.

Just like I could crank out a novel a day with ChatGPT and say that I made them all.

Or I could use it to write 100 blog posts a day and say that I made them all.

In all those cases, I caused things to get made. But did I make them? I don't feel like I can honestly say I did. (And the copyright office is starting to have a thing or two to say about it, as well.)

This is what I struggle with. I like making things.

As a capitalist, sure, your cash is good with me. Tell me where to shoot and I'll shoot. But in terms of keeping my soul fed, it's a tough one.

somewhereoutth 3 hours ago | parent | prev | next [-]

Really the distinction is between those that can see the bigger picture, and those that only see what is in front of them. Of course the immediate gratification of LLMs appeals to the latter, whereas the former are all too aware of the systemic downside of such automated content generation.

gneuron 12 hours ago | parent | prev | next [-]

This is the defining divide of AI, period. Whether you're a craft lover of art, writing, music, code, hell, business processes and the idea of "doing work." There are those who love the craft, and those who want the result of the craft. AI is a faster path to that end result (whether you're happy with that result is another matter). From that POV, it could lead to us speed-running our civilization into another era; abundant prosperity, or full on collapse. Bro...

fellowniusmonk 6 hours ago | parent | prev | next [-]

I love developing clever algorithms and writing elegant code. It's a hobby of mine and it makes me happy.

I love shipping tangible products because it makes others happy and makes me money.

Do what you love for work and you'll never love anything again.

Do what you love for a hobby and keep it pure.

Don't let either be your identity, you only diminish yourself and grow old in the doing.

kace91 14 hours ago | parent | prev | next [-]

Lots of mentions of the term mourning... As they say in my country, don't sell the skin until you kill the bear.

All I'm seeing around me is people dropping best practices in a FOMO driven push for speed: let's stop reviews, let's drive 5 agents in parallel, let's not even look at the code!

This is going to blow up.

Only after we pick up the remains will we find a more sustainable approach to AI usage. I suspect that version will still require crafters.

If we end up in a place where the craft truly is dead, then congratulations, your value probably just dropped to zero. Everyone who's been around startup culture knows the running jokes about the 'I have a great idea, I just need someone to code it' guys. Now you're one of them, and you'll find out how much ideas are worth.

LPisGood 13 hours ago | parent | next [-]

> If we end up in a place where the craft truly is dead, then congratulations, your value probably just dropped to zero

I think, then that the value of all knowledge work will have dropped to zero. Software engineering is, to my mind, “intelligence complete.” If you can do it with knowledge work, you can have software do it.

agentultra 13 hours ago | parent [-]

That’s not the point of nor the reason for knowledge work.

The fundamental mechanism of knowledge work is people. They haven’t changed at all. And what they need to understand and learn hasn’t changed. All the agents in the world and all of the methane guzzling data centres can’t tell you what to write in the specification nor if what the computer has generated faithfully implements that specification.

skydhash 13 hours ago | parent | next [-]

Yep. Most knowledge work is about coordination between people and transporting the right information to the people that are thinking and the people that are doing (not always separate groups, but can be one group switching mode). You need people because there's a lot of shared context between individuals in society that is not encoded anywhere.

dd8601fn 12 hours ago | parent [-]

A year ago everyone was sure these things couldn’t write functional code. A few months ago people started saying they need to be operated by people who could otherwise write the code.

It sounds like we’re headed towards… the guy in Office Space who took specifications from the customer via a secretary and gave those to the engineers (and we know what happened to him).

But I’m not sure that’s a thing, at least for long, either. The original super power of these things wasn’t that they could write code. It was that they could very competently extract meaning from natural language, debug what you were saying from the terrible way you expressed it, and still formulate competent answers.

That doesn’t sound like a comfortable place for former devs to sit for the next few years.

skydhash 12 hours ago | parent [-]

> A year ago everyone was sure these things couldn’t write functional code

Even ChatGPT could write code when it came out.

> It was that they could very competently extract meaning from natural language, debug what you were saying from the terrible way you expressed it, and still formulate competent answers

“Competent” is doing a lot of work here. If it were so, AI would take change requests directly from the business side and put the implementation immediately into production. But instead, all you see is FOMO propaganda to get devs to adopt the tool, with no one asking if it actually helps the devs do their job.

carlmr 12 hours ago | parent [-]

>But instead, all you see are FOMO propaganda to get devs to adopt the tool with no asking if it actually helps the devs do their job.

If (big if) LLMs/AI take over all of knowledge work the first thing you'll notice is that the first company getting to the point of automating all knowledge work will close off their models to the public, not advertise it, and take over every business on the planet.

You wouldn't waste a dime on advertising, influencers, or convincing people to use your product.

Taking over every business in the world seems more lucrative than selling $20 subscriptions to people.

ytoawwhra92 11 hours ago | parent [-]

There's not going to be a single point at which that happens.

More likely what we will see (if this happens) is AI companies entering close partnerships with other businesses, building up their models' ability to do that sort of work, then either acquiring their partner or directly competing with them.

Similar to how Apple monitors developers having success on their platform and then launches a first-party offering.

carlmr 7 hours ago | parent [-]

This might be happening too; however, B2C advertising and heavy astroturfing are a sure sign that they don't even think they're close to this goal.

The average consumer pays the least for subscriptions and asks the most uninteresting questions of the AI in terms of gaining insight. The only goal here can be upholding the narrative that everything will be AI soon™.

satisfice 13 hours ago | parent | prev [-]

Exactly right.

I have an idea for an “evidence editor.” Claude is waiting for me to tell it exactly what I want this thing to be. But I don’t know. I haven’t figured out how to square the various circles, even in my fantasies. Until I do, Claude sits and waits. And waits…

ianm218 13 hours ago | parent | prev | next [-]

> If we end up in a place where the craft truly is dead, then congratulations, your value probably just dropped to zero

I think the craft is going to die, and I'm not thrilled about it. I don't feel like there is a contradiction there.

Ferret7446 11 hours ago | parent [-]

There's no contradiction, but if/when it happens, being "not thrilled" will overflow off the bottom of your list of concerns

amarant 14 hours ago | parent | prev | next [-]

The beginnings of that sustainable approach is already out there: https://boristane.com/blog/how-i-use-claude-code/

RayVR 14 hours ago | parent | prev | next [-]

These bear related sayings always make me laugh. The one I was told by a Russian: “don’t argue over how to skin the bear before you’ve killed it”

scuff3d 13 hours ago | parent | prev | next [-]

A couple of guys at work have been raving about Claude. How quick they get stuff done, how great the code is, how working any other way is a waste of time.

I just had the misfortune today to wade into one of their codebases. It's 60k lines of code for something that should have been simple, and it's an absolute fucking mess. I'm gonna have to rip out most of it and start over just to get it to do what we actually need it to do.

I use LLMs, they come in handy, and I use agents, but this "have agents do everything" nonsense is a disaster, and it's only going to get worse.

On the upside I'm getting paid to fix this shit show.

Ferret7446 11 hours ago | parent | next [-]

Devil's advocate: perhaps you are holding a hammer complaining about rivets. If you used AI to interact with the code instead, you wouldn't have to wade through the mess and might have gotten what you needed fairly easily, except you're using the wrong tool for the job

scuff3d 10 hours ago | parent [-]

Bad code is bad code, doesn't matter how it was generated.

antonvs 13 hours ago | parent | prev [-]

> On the upside I'm getting paid to fix this shit show.

A lot of my career has been this, not due to choice but circumstance. Startups write terrible code, in general. Enterprises write terrible code. I’ve worked with both. If it becomes important enough, someone has to fix it at some point.

Current AI models seem to be job security machines for that kind of work.

operatingthetan 14 hours ago | parent | prev | next [-]

>This is going to blow up.

We are way past wringing our hands over agentic engineering. Every startup and all fast moving companies are onboard. They don't hand code anymore. There will not be some code quality crisis that will stop everyone in their tracks. I'm trying to cope with this too, but I don't think the best path is praying for failure.

kace91 14 hours ago | parent | next [-]

Just out of the popularity of the claim, I’ll bite.

Both big tech and startups are now full of people working at 10x, features are written as fast as PMs can think them, monoliths self heal with agents buzzing over them.

10x means 10 times the outcomes in a given amount of time, so did you see the last iOS version pack a decade's worth of features in a single release?

Do you remember when meta moved their backend to rust in a month?

What about Microsoft software not having a single bug in a year?

Yeah, me neither.

operatingthetan 14 hours ago | parent | next [-]

I didn't say anything about increased productivity or 10x. Feel free to revise your strawman.

kace91 13 hours ago | parent | next [-]

Fair, let’s revise it then.

If not productivity, what’s the result AI is getting that is disruptive enough to make our previous work obsolete?

SpicyLemonZest 13 hours ago | parent [-]

At 11 this morning, I wanted to both debug an issue and take a meeting before lunch. Before AI, I would have had to just start debugging after lunch, there wouldn't have been enough time to do both. But now I had Claude debug the issue concurrently with the meeting. Its answer didn't actually make sense (I still do think I'm smarter than Claude, although the gap is narrowing!), but it showed enough of its work that I could make a good guess about what was really going on, and when I asked it to check my hypothesis I got back from lunch with some debug logs that confirmed I'd found the bug.

kace91 13 hours ago | parent [-]

I can easily believe that, I agree Claude has applications.

I am disputing the idea that this is enough of a game changer to make us mourn our now-lost craft. I'm also pointing out that we've discovered a world of footguns dressed as shortcuts, which we're not treating with proper care.

First, your experience was required for that story to have a happy ending. Second, we both know someone else could probably have gone with Claude’s senseless hypothesis, asked for a fix and sent it for review. This last part is becoming pretty universal.

afavour 13 hours ago | parent | prev [-]

If there’s no increased productivity then what’s the point in spending all the money?

operatingthetan 13 hours ago | parent [-]

I didn't say there was or there wasn't. They just don't get to infer that I did and then attack that as my position.

sifar 12 hours ago | parent [-]

What is your position? Genuinely asking, as someone who is similarly trying to cope and doesn't want to be trampled on the road ahead. Primarily because it doesn't make me better, it doesn't benefit me as an individual, and it takes the joy out of understanding things.

jorl17 13 hours ago | parent | prev [-]

10x is definitely possible at a startup level. I suppose not in a big tech world (seems obvious to me, and it's not like development speed was the bottleneck there either, right?)

You can choose not to believe what I say (and I genuinely understand if you do), and I can simply keep on doing it. I'm not pulling it out of thin air either: on Tuesday I did in 8 hours the work I had scheduled for roughly 65. OK, so maybe it's not 10x, maybe it's 8x; same ballpark.

And that's only talking about development. If I get into other aspects... I have just spent the last 1.5 hours creating an incredibly detailed backlog of tasks and epics. It's the most detailed planning I have ever done in my life, and it has been working very well. It's like we merged the good parts of highly-detailed waterfall with the speed of agile.

Tee-hee: Watergile. (I'm sure some expert in the field will let me know I've coined a new term for something that very much already has a name; excuse my ignorance in advance.)

Nonetheless, I did this all by talking to the computer, which is interfacing with my project management tools, the project documentation, and the project code. Full context on everything. In the past, it would have taken me 3 or 4 days to create the same number of tasks with a vaguely similar amount of detail. But, in truth, I wouldn't have spent so much time putting this love into the craft (!!) of planning a project, because it would exhaust me and feel like a waste of time.

Don't get me wrong, I totally see shit code being thrown around everywhere by inferior AI models or by people who can't tame the beasts, but the right people in my life are _clearly_ building more, better-tested code, actually built with more care. Maybe it's not at the line-by-line level, but it certainly is from the end-product perspective (thinking of the actual end user). I accept your mileage may vary -- this is my very personal experience.

Maybe it'll stop happening, who knows. Maybe price will be prohibitive, or maybe we'll have such an avalanche of ideas that weren't worth building that everyone will be overwhelmed and take a step back. Or maybe we won't develop juniors into the seniors of tomorrow. Or maybe everything will indeed implode once products are large enough that the original development speed can't be maintained anymore and expectations are mismatched.

What I do know is that it is definitely happening in my world, and I haven't had this much fun since I was a little kid learning to code.

the__alchemist 13 hours ago | parent [-]

Do you have testable hypothesis for how the 10x will manifest? I.e., is there a way we could (coarsely) measure this in a year or two from now?

jorl17 13 hours ago | parent | next [-]

That's a very good question. Some metrics I'm watching or considering watching right now:

- Amount of leads we're taking in (per unit of time)

- Sprint velocity changes (task complexity should stay roughly the same with AI, while team velocity increases; we've been seeing this happen)

- Hiring rates (more salespeople, fewer developers?)

- Number of projects per unit of time (of similar dimension, hard to measure)

- Length of "bugfixing buffer" before big releases (we've actually been noticing this go down)

- Another way of saying it is: number of bugs, or bugs per feature

- Drift between planned execution time and actual execution time (we've been delivering early...but I guess we'll soon adjust our estimates...or maybe not, who knows?)

- Spend on AI models

- I can't measure this, but I can sort of "feel it": the overall feedback we get from clients, the feeling we get from them.

- Number of tests (tests have skyrocketed. Can't be sure about the quality, but, hey, it's a metric)

- Feature turnaround time (how long since "feature is proposed" until it's actually implemented)

- documentation to code ratio (not sure what we'll make of it, but there's a somewhat worrying trend here)

- Team balance: is everyone slowly becoming full-stack? Do we feel that those who aren't are significantly affecting development speed? If so, that indicates the others are somehow moving faster

I can't really think of any others, but I'm sure they exist.
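Out of curiosity, here's a rough sketch of what tracking a couple of these could look like. Everything in it (field names, numbers) is made up for illustration; it's not from any real tooling:

```python
# Toy tracking of two metrics from the list above:
# - drift between planned and actual execution time per task
# - bugs per feature per release
# All data below is invented sample data.

tasks = [
    {"planned_hours": 8, "actual_hours": 5},
    {"planned_hours": 16, "actual_hours": 12},
    {"planned_hours": 4, "actual_hours": 6},
]

releases = [
    {"features": 10, "bugs": 7},
    {"features": 12, "bugs": 4},
]

# Negative drift means we're delivering early on average.
drift = sum(t["actual_hours"] - t["planned_hours"] for t in tasks) / len(tasks)

# One ratio per release, to watch the trend over time.
bugs_per_feature = [r["bugs"] / r["features"] for r in releases]

print(f"avg drift: {drift:+.1f}h")        # avg drift: -1.7h
print(f"bugs/feature per release: {[round(x, 2) for x in bugs_per_feature]}")
```

The interesting part isn't the arithmetic, it's committing to the same definitions release after release so the trend line means something.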

AnimalMuppet 13 hours ago | parent | prev [-]

The future will not be evenly distributed. You can't expect to see it in the productivity of the industry as a whole, or even the productivity of a large company. You might be able to see it in a medium-sized team if you measure carefully.

jorl17 13 hours ago | parent | next [-]

I think I agree with you. I didn't mean to imply that big corps will build faster (or more) -- in fact I said I don't expect them to right now, and I'm not so sure that will change.

What I believe is that early prototype development and pivoting is insanely fast now. And if you find excellent engineers who are also great product people, and then pair them with people who have truly great ideas, many wonderful new products will emerge.

the__alchemist 13 hours ago | parent | prev [-]

So it sounds like measuring this directly in individuals or companies might be tough (unless the company is medium-sized?). Maybe we could look for broader trends in the economy and beyond. What sorts of companies will this manifest in? I.e., mostly "tech" companies, or beyond?

autoexec 13 hours ago | parent | prev | next [-]

> I don't think the best path is praying for failure. Embrace it

"Embrace failure" is exactly the attitude every company is going to take. They've already been working at bottoming out our expectations.

We should have been running companies out of business with regulations and abandonment when their human-written software leaked our private data to criminals, or when their untested forced-updates shut down our systems and sent our IT teams scrambling, or when their unoptimized code forced us to upgrade our hardware or negated any performance gains we should have seen from investing in upgrades.

The quality, reliability, and security of the software we all use and depend on is going to nosedive, and companies already know they can get away with it. They aren't going to start caring about how we feel about that now. "Pay more and settle for less" is where we are today. "Embrace Failure" is the future we're sprinting towards.

Roguelazer 14 hours ago | parent | prev | next [-]

That's absolutely not true. The places that have embraced "agentic engineering" are mostly garbage factories, and lots of places, including plenty of startups and fast-moving companies are staying off of this trend. I recognize that most of the people on this site are just trying to self-promote for their own gig, but the level of misinformation is sometimes just staggering.

burningChrome 13 hours ago | parent | next [-]

Want something to be terrified of?

I work at a massive healthcare company. They're 100% on the AI bandwagon and are putting AI everywhere they can: billing, software, DevOps, everywhere. If you think you can give an agent some information and have it go to work for some user, it's 100% on the table for the company to then either a) outsource the rest offshore or b) lay the person off or shrink the department to increase the bottom line.

Your healthcare, right now, is being offloaded to AI agents and bots and this is only the beginning.

lp0_on_fire 12 hours ago | parent [-]

I just sat through the annual "choose your healthcare plan" bullshit, and the "meeting" was literally one of the HR people pulling up a PowerPoint narrated by "AI". You could tell in the first ten seconds.

You’d think our plans would be cheaper given they’re offloading all this work to agents they don’t have to pay a salary to…right?

operatingthetan 14 hours ago | parent | prev | next [-]

>lots of places, including plenty of startups and fast-moving companies are staying off of this trend.

Provide some examples then? Everyone who is all in on agentic code are pretty vocal about it. Who is declaring the opposite stance? Anyone?

VoidWarranty 13 hours ago | parent [-]

Both claims are hyperbole.

Reality remains in the middle, but there are plenty of examples of either extreme right now.

k32k 9 hours ago | parent | prev | next [-]

Indeed, I feel this place has gone insane. There's no balance here.

You've got boosters and then you've got people who are panicking/fighting against anything pro-AI.

sothatsit 14 hours ago | parent | prev [-]

It is not just startups or small companies embracing agentic engineering… Stripe published blog posts about their autonomous coding agents. Amazon is blowing up production because they gave their agents access to prod. Google and Microsoft develop their own agentic engineering tools. It’s not just tech companies either, massive companies are frequently announcing their partnerships with OpenAI or Anthropic.

You can’t just pretend it’s startups doing all the agentic engineering. They’re just the ones pushing the boundaries on best practices the most aggressively.

techpression 14 hours ago | parent | prev | next [-]

Outwards communication and inside results tend to differ vastly. I’ve heard some true horror stories already from companies who claim they’re doing amazing things with great results. You should be especially on guard if it’s a publicly traded company, selling AI usage is necessary to appease the market (and thereby C-level stock value).

operatingthetan 14 hours ago | parent [-]

>Outwards communication and inside results tend to differ vastly.

This is a good call out, but I'm talking to a lot of friends at other companies. So my perspective is informed by both news and personal anecdote.

techpression 14 hours ago | parent [-]

Sure, it goes both ways, I’m having great results at the startup I’m working at too.

guelo 14 hours ago | parent | prev [-]

Well, nobody has had to pay down the tech debt yet on the last 6 months of that insanity. I think the age-old SWE best practices will still hold up in time.

bigwheels 14 hours ago | parent | prev [-]

I was still skeptical at the start of this year, but there seems to be a shift underway. Found the StrongDM Dark Factory docs in Feb and they've netted novel results that have been inspiring enough to keep studying and practicing.

https://factory.strongdm.ai/techniques

https://factory.strongdm.ai/products/attractor

If you've found better or ancillary resources, please share.

yoyohello13 13 hours ago | parent [-]

Wow! My productivity increased 100x after I started listening to ad bots.

epolanski 13 hours ago | parent | prev | next [-]

There's no divide.

Brilliant engineers, among the best software craftsmen out there are using AI daily and speeding up their processes.

The author of Redis, antirez, said a month ago that he spent 2 weeks on Redis tinkering with LLMs... and it was just the design phase; not a single line of code was authored. The ability to interrogate LLMs and have them criticize his ideas and edge cases sped up his process by months.

He also used LLMs successfully to find multiple issues in Redis that would've taken him longer to find without them.

I myself spend with AI way more time tinkering and gathering information than authoring code.

Am I a craft lover or a result chaser?

But sure, let's keep splitting everything into conservative vs. liberal, black and white, craftsman vs. vibe coder... give me a break.

antonvs 9 hours ago | parent [-]

Yeah. We’re seeing a lot of posts from people dealing with their emotions about AI, and trying to rationalize those emotions. Blaming a straw-manned group of other people who supposedly don’t share some quality the author values is an easy way to do that rationalization. The reductively binary classification is a sign that they’re indulging in something other than a serious analysis.

umanwizard 13 hours ago | parent | prev | next [-]

People who say directing an AI is just "moving up another level of abstraction" are missing the point that it's a completely different kind of work. Everything from machine code to Haskell is a predictable deductive logical system, whereas AIs are not.

dang 8 hours ago | parent [-]

It's different, but it isn't completely different. That's one reason why it's hard to make sense of this change.

keybored 14 hours ago | parent | prev | next [-]

Every little minor dispute can be split into some arbitrary dichotomy which is vaguely defensible. Not interesting.

Twelve years ago I had the bright idea: why not make a little, just a tiny little (what I would now call) preprocessor for Java that does the same thing in fewer characters and is clearer? Everyone would love it. Of course no one loved it. Well, I never implemented it, because I got some sense: you can't just make tiny little preprocessors, a little code generation here and there, just code-generate this and tweak after the fact. Right? It's not principled.

You can cook up a dichotomy. Good for you. I think the approach is just space age technology meets Stone Age mindset. It’s Flintstone Engineering. It’s barely even serious.

I am not offended that you took my craft. I am offended that you smear paint on the wall with three hundred parallel walls and painters and pick the best one. Or whatever Rube Goldberg setup is the thing that will take over the world as of thirty minutes ago.

Make something rock solid like formal verification with LLM assist (or LLM with formal verification assist?). Something that a “human” can understand (at this point maybe only the CEO is left). Something that is understandable, deterministic.

I might be out of a job. But I will not be offended. And I will respect it.

keybored 4 hours ago | parent [-]

Now about the grief angle. This is AI Inevitability Soothsaying.[1]

It’s all just a backdrop for hammering home the same inevitabilism: GenAI, GenAI, GenAI. Just slap on whatever excuse to hammer this over, and over, and over. Grief, self-identity, some other pseudo-humanistic angle.

Now is the time that programmers talk about their feelings. Give me a break.

Because the AI hype machine isn’t content with just eventually taking your job or your craft. It can’t just quietly get exponentially better until it sweeps your legs effortlessly. No, because there’s also a market out there, and a hype needs to be built. So now you need to see it all day in your tech news aggregator. Just push all the interesting stuff out. Replace with autopilot.

No, really. Even if AI worked perfectly right now you would still need to have a constant churn of content about how to babysit this thing that speaks English already and is more capable than you. I guess it’s kind of paradoxical.

[1] https://news.ycombinator.com/item?id=46935607

sdevonoes 13 hours ago | parent | prev | next [-]

It’s sad not because of AI itself but because of the companies behind AI: we are now paying for every single line of code we produce. That sucks

Freak_NL 4 hours ago | parent [-]

Weird you got downvoted for that. This is exactly the thing which has been bothering me about all of this.

Pre-LLM there were paid products and licensed stuff, but for the most part you could code in any language using free or community-edition IDEs and mostly open toolchains. The total requirement for me as an individual to start using some language or stack was owning a computer and having internet access, both provided by a stable market with consumer choice.

Post-LLM there is now this blackbox of a service which you depend on and for which someone is picking up a not-insignificant tab where the costs currently seem massively subsidised, and which is getting to be a requirement for your skill set. Open local models? Fine, but who is training them? How will those stay up-to-date?

Oh, and then there is the not-quite-insignificant ecological aspect and that bit where the powers-that-be seem to have collectively decided that copyright doesn't really apply here.

bonkabonka 14 hours ago | parent | prev | next [-]

Yow, submitter sure isn't shy with their bias. Maybe defang the title?

dang 8 hours ago | parent [-]

Submitted title was "The AI coding divide: craft lovers vs. result chasers" - which does seem to be a fair statement of what the article is about, and in that sense not so biased.

We've reverted the title to be that of the article now, though, in keeping with the site guidelines, since it was neither misleading nor linkbaity.

https://news.ycombinator.com/newsguidelines.html

elliotbnvl 14 hours ago | parent | prev [-]

Strong agree. It needs another pass or two at editing though; some painful LLM-isms sticking out there :'(

lmorchard 13 hours ago | parent [-]

Which bits? And don't say the em-dashes because I've been over-using them since high school