simonw 3 days ago

Something I like about our weird new LLM-assisted world is the number of people I know who are coding again, having mostly stopped as they moved into management roles or lost their personal side project time to becoming parents.

AI assistance means you can get something useful done in half an hour, or even while you are doing other stuff. You don't need to carve out 2-4 hours to ramp up any more.

If you have significant previous coding experience - even if it's a few years stale - you can drive these things extremely effectively. Especially if you have management experience, quite a lot of which transfers to "managing" coding agents (communicate clearly, set achievable goals, provide all relevant context.)

yason 3 days ago | parent | next [-]

I don't know but to me this all sounds like the antithesis of what makes programming fun. You don't have productivity goals for hobby coding where you'd have to make the most of your half an hour -- that sounds too much like paid work to be fun. If you have a half an hour, you tinker for a half an hour and enjoy it. Then you continue when you have another half an hour again. (Or push into night because you can't make yourself stop.)

lmorchard 3 days ago | parent | next [-]

What you consider fun isn't universal. Some folks don't want to just tinker for half an hour, some folks enjoy getting a particular result that meets specific goals. Some folks don't find the mechanics of putting lines of code together as fun as what the code does when it runs. That might sound like paid work to you, but it can be gratifying for not-you.

chung8123 3 days ago | parent | next [-]

For me it's all the build stuff and scaffolding I have to get in place before I can even start tinkering on a project. I never formally learned all the systems and tools, and AI makes all of that 10x easier. When I hit something I cannot figure out, instead of googling for half an hour it's 10 minutes with AI.

mbirth 3 days ago | parent [-]

The difference is that after you’ve googled it for ½ hour, you’ve learned something. If you ask an LLM to do it for you, you’re none the wiser.

qudat 3 days ago | parent | next [-]

Wrong. I will spend 30 minutes having the LLM explain every line of code and why it's important, with context-specific follow-up questions. An LLM is one of the best ways to learn ...

Akronymus 2 days ago | parent | next [-]

So far, each and every time I used an LLM to help me with something, it hallucinated non-existent functions or was incorrect in an important but non-obvious way.

Though, I guess I do treat LLMs as a last resort longshot for when other documentation is failing me.

naasking 2 days ago | parent | next [-]

Knowing how to use LLMs is a skill. Just winging it without any practice or exploration of how the tool fails can produce poor results.

b112 2 days ago | parent [-]

"You're holding it wrong"

99% of an LLM's usefulness vanishes, if it behaves like an addled old man.

"What's that sonny? But you said you wanted that!"

"Wait, we did that last week? Sorry let me look at this again"

"What? What do you mean, we already did this part?!"

naasking 2 days ago | parent [-]

Wrong mental model. Addled old men can't write code 1000x faster than any human.

b112 2 days ago | parent [-]

I'd prefer 1x "wrong stuff" than wrong stuff blasted 1000x. How is that helpful?

Further, they can't write code that fast, because you have to spend 1000x explaining it to them.

naasking a day ago | parent [-]

Except it's not 1000x wrong stuff, that's the point. But don't worry, the Amish are welcoming of new luddites!

oblio 2 days ago | parent | prev [-]

Which LLMs have you tried? Claude Code seems to be decent at not hallucinating; Gemini CLI is more eager to hallucinate.

I don't think current LLMs take you all the way, but a powerful code generator is a useful thing; just assemble guardrails and keep an eye on it.

Akronymus 2 days ago | parent [-]

Mostly ChatGPT, because I see zero value in paying for any LLM, nor do I wish to give up my data to any LLM provider.

Anamon 2 days ago | parent | next [-]

Speaking as someone who doesn't really like or do LLM-assisted coding either: at least try Gemini. ChatGPT is the absolute worst you could use. I was quite shocked when I compared the two on the same tasks. Gemini gets decent initial results you can build on. ChatGPT generates 99% absolutely unusable rubbish. The difference is so extreme, it's not even a competition anymore.

I now understand why Altman announced "Code Red" at OpenAI. If their tools don't catch up drastically, and fast, they'll be one for the history books soon. Wouldn't be the first time the big, central early mover in a new market suddenly disappears, steamrolled by the later entrants.

oblio 2 days ago | parent | prev [-]

They work better with project context and access to tools, so yeah, the web interface is not their best foot forward.

That doesn't mean the agents are amazing, but they can be useful.

Akronymus 2 days ago | parent [-]

A simple "how do I access x in y framework in the intended way" shouldn't require any more context.

Instead of telling me about the z option, it keeps hallucinating something that doesn't exist and even says it's in the docs when it isn't.

Literally just wasting my time

oblio 2 days ago | parent [-]

I was in the same camp until a few months ago. I now think they're valid tools, like compilers. Not in the sense that everyone compares them (compilers made asm development a minuscule niche of development).

But in the sense that even today many people don't use compilers or static analysis tools. But that world is slowly shrinking.

Same for LLMs: the non-LLM world will probably shrink.

You might be able to have a long and successful career without touching them for code development. Personally I'd rather check them out since tools are just tools.

_ikke_ 3 days ago | parent | prev [-]

As long as what it says is reliable and not made up.

qudat 2 days ago | parent | next [-]

That's true for internet searching. How many times have you gone to SO, seen a confident answer, tried it, and it failed to do what you needed?

Anamon 2 days ago | parent [-]

Then you write a comment, maybe even figure out the correct solution and fix the answer. If you're lucky, somebody already did. Everybody wins.

That's what LLMs take away. Nothing is given back to the community, nothing is added to shared knowledge, no differing opinions are exchanged. It just steals other people's work from a time when work was still shared and discussed, removes any indication of its source, claims it's a new thing, and gives you no way to contribute back, or even discuss it and maybe get confronted with different opinions, or discover a better way.

Let's not forget that one of the main reasons why LLMs are useful for coding in the first place is that they scraped SO from the time when people still used it.

anakaine 3 days ago | parent | prev [-]

I feel like we are just covering whataboutism tropes now.

You can absolutely learn from an LLM. Sometimes documentation sucks and the LLM has learned how to put stuff together from examples found in unusual places, and it works, and shows what the documentation failed to demonstrate.

And with the people above, I agree - sometimes the fun is in the end process, and sometimes it is just filling in the complexity we do not have time or capacity to grab. I for one just cannot keep up with front end development. It's an insurmountable nightmare of epic proportions. I'm pretty skilled at my back end deep dive data and connecting APIs, however. So - AI to help put together a coherent interface over my connectors, and off we go for my side project. It doesn't need to be SOC2 compliant and OWASP proof, nor does it need ISO27001 compliance testing, because after all this is just for fun, for me.

hyperadvanced 3 days ago | parent | prev | next [-]

You can study the LLM output. In the “before times” I’d just clone a random git repo, use a template, or copy and paste stuff together to get the initial version working.

inferiorhuman 3 days ago | parent [-]

Studying gibberish doesn't teach you anything. If you were cargo culting shit before AI you weren't learning anything then either.

ben_w 3 days ago | parent | next [-]

Necessarily, LLM output that works isn't gibberish.

The code that LLMs output has worked well enough to learn from since the initial launch of ChatGPT, even though back then you might have had to repeatedly say "continue" because it would stop in the middle of writing a function.

inferiorhuman 2 days ago | parent [-]

  Necessarily, LLM output that works isn't gibberish.
Hardly. Poorly conjured up code can still work.

ben_w 2 days ago | parent [-]

"Gibberish" code is necessary code which doesn't work. Even in the broader use of the term: https://en.wikipedia.org/wiki/Gibberish

Especially in this context, if a mystery box solves a problem for me, I can look at the solution and learn something from that solution, c.f. how paper was inspired by watching wasps at work.

Even the abject failures can be interesting, though I find them more helpful for forcing my writing to be easier to understand.

oblio 2 days ago | parent | prev [-]

It's not gibberish. More than that, LLMs frequently write comments (some are fluff but some explain the reasoning quite well), variables are frequently named better than cdx, hgv, ti, stuff like that, plus looking at the reasoning while it's happening provides more clues.

Also, it's actually fun watching LLMs debug. They're reasonably similar to devs while investigating, but they have a data bank the size of the internet, so they can pull hints that sometimes surprise even experienced devs.

I think hard earned knowledge coming from actual coding is still useful to stay sharp but it might turn out the balance is something like 25% handmade - 75% LLM made.

inferiorhuman 2 days ago | parent [-]

  they have a data bank the size of the internet so they can
  pull hints that sometimes surprise even experienced devs.
That's a polite way of phrasing "they've stolen a mountain of information and overwhelmed resources that humans would otherwise use to find answers." I just discovered another victim: the Renesas forums. Cloudflare is blocking me from accessing the site completely, the only site I've ever had this happen to. But I'm glad you're able to have your fun.

  it might turn out the balance is something like 25% handmade - 75% LLM made.
Doubtful. As the arms race continues AI DDoS bots will have less and less recent "training" material. Not a day goes by that I don't discover another site employing anti-AI bot software.

ben_w 2 days ago | parent | next [-]

> they've stolen a mountain of information

In law, training is not itself theft. Pirating books for any reason including training is still a copyright violation, but the judges ruled specifically that the training on data lawfully obtained was not itself an offence.

Cloudflare has to block so many more bots now precisely because crawling the public, free-to-everyone, internet is legally not theft. (And indeed would struggle to be, given all search engines have for a long time been doing just that).

> As the arms race continues AI DDoS bots will have less and less recent "training" material

My experience as a human is that humans keep re-inventing the wheel, and if they instead re-read the solutions from even just 5 years earlier (or 10, or 15, or 20…) we'd have simpler code and tools that did all we wanted already.

For example, "making a UI" peaked sometime between the late 90s and mid 2010s with WYSIWYG tools like Visual Basic (and the mac equivalent now known as Xojo) and Dreamweaver, and then in the final part of that a few good years where Interface Builder finally wasn't sucking on Xcode. And then everyone on the web went for React and Apple made SwiftUI with a preview mode that kept crashing.

If LLMs had come before reactive UI, we'd have non-reactive alternatives that would probably suck less than all the weird things I keep seeing from reactive UIs.

Anamon 2 days ago | parent [-]

> Cloudflare has to block so many more bots now precisely because crawling the public, free-to-everyone, internet is legally not theft.

That is simply not true. Freely available on the web doesn't mean it's in the Public Domain. The "lawfully obtained" part of your argument is patently untrue. You can legally obtain something, but that doesn't mean any use of it is automatically legal as well. Otherwise, the recent Spotify dump by Anna's Archive would be legal as well.

It all depends on the license the thing is released under, chosen by the person who made it freely accessible on the web. This license is still very emphatically a legally binding document that restricts what someone can do with it.

For instance, since the advent of LLM crawling, I've added the "No Derivatives" clause to the CC license of anything new I publish to the web. It's still freely accessible, can be shared on, etc., but it explicitly prohibits using it for training ML models. I even add an additional clause to that effect, should the legal interpretation of CC-ND ever change. In short, anyone training an LLM on my content is infringing my rights, period.

ben_w 2 days ago | parent [-]

> Freely available on the web doesn't mean it's in the Public Domain.

Doesn't need to be.

> The "lawfully obtained" part of your argument is patently untrue. You can legally obtain something, but that doesn't mean any use of it is automatically legal as well.

I didn't say "any" use, I said this specific use. Here's the quote from the judge who decided this:

  5. OVERALL ANALYSIS.
  After the four factors and any others deemed relevant are “explored, [ ] the results [are] weighed together, in light of the purposes of copyright.” Campbell, 510 U.S. at 578. The copies used to train specific LLMs were justified as a fair use. Every factor but the nature of the copyrighted work favors this result. The technology at issue was among the most transformative many of us will see in our lifetimes.
- https://storage.courtlistener.com/recap/gov.uscourts.cand.43...

> Otherwise, the recent Spotify dump by Anna's Archive would be legal as well.

I specifically said copyright infringement was separate. Because, guess what, so did the judge the next paragraph but one from the quote I just gave you.

> For instance, since the advent of LLM crawling, I've added the "No Derivatives" clause to the CC license of anything new I publish to the web. It's still freely accessible, can be shared on, etc., but it explicitly prohibits using it for training ML models. I even add an additional clause to that effect, should the legal interpretation of CC-ND ever change. In short, anyone training an LLM on my content is infringing my rights, period.

It will be interesting to see if that holds up in future court cases. I wouldn't bank on it if I was you.

oblio 2 days ago | parent | prev [-]

> That's a polite way of phrasing "they've stolen a mountain of information and overwhelmed resources that humans would use to other find answers."

Yes, but I can't stop them, can you?

> But I'm glad you're able to have your fun.

Unfortunately I have to be practical.

> Doubtful. As the arms race continues AI DDoS bots will have less and less recent "training" material. Not a day goes by that I don't discover another site employing anti-AI bot software.

Almost all these BigCos are using their internal code bases as material for their own LLMs. They're also increasingly instructing their devs to code primarily using LLMs.

The hope that they'll run out of relevant material is slim.

Oh, and at this point it's less about the core/kernel/LLMs than it is about building ol' fashioned procedural tooling aka code around the LLM, so that it can just REPL like a human. Turns out a lot of regular coding and debugging is what a machine would do, READ-EVAL-PRINT.
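
Roughly, that loop looks something like this (a toy Go sketch; askLLM is a placeholder for whatever model call you'd wire in, and the "tests pass" check is just for illustration):

  package main

  import (
      "fmt"
      "os/exec"
      "strings"
  )

  // askLLM stands in for the actual model call: given the goal and the
  // last build/test output, it returns the next shell command to try.
  func askLLM(goal, lastOutput string) string {
      // ... call your model of choice here ...
      return "go test ./..."
  }

  func main() {
      goal := "make the tests pass"
      lastOutput := ""

      // READ-EVAL-PRINT: ask, run, feed the output back, repeat.
      for i := 0; i < 5; i++ {
          cmd := askLLM(goal, lastOutput)
          out, _ := exec.Command("sh", "-c", cmd).CombinedOutput()
          lastOutput = string(out)
          fmt.Println(lastOutput)
          if strings.Contains(lastOutput, "ok") {
              break
          }
      }
  }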

I have no idea how far they're going to go, but the current iteration of Claude Code can generate average or better code, which is an improvement in many places.

inferiorhuman 2 days ago | parent [-]

  The hope that they'll run out of relevant material is slim.
If big corps are training their LLMs on their LLM written code…

oblio 2 days ago | parent [-]

You're almost there:

> If big corps are training their LLMs on their LLM written code <<and human reviewed code>>…

The last part is important.

inferiorhuman 18 hours ago | parent [-]

Until the humans are required to (or just plain want to) use LLMs to review the code.

dcre 2 days ago | parent | prev | next [-]

This is just not true. I have wasted many hours looking for answers to hard-to-phrase questions and learned very little from the process. If an LLM can get me the same result in 30 seconds, it's very hard for me to see that as a bad thing. It just means I can spend more time thinking about the thing I want to be thinking about. I think to some extent people are valorizing suffering itself.

codr7 2 days ago | parent [-]

Learning means friction, it's not going to happen any other way.

onemoresoop 2 days ago | parent | next [-]

Some of it is friction, some of it is play. With AI you can get faster to the play part, where you do learn a fair bit. But in a sense I agree that less is retained. I think that is not because of a lack of friction, but because of the fast pace of getting what you want now. You no longer need to make a conscious effort to remember any of it because it's effortless to get it again with AI if you ever need it. If that's what you mean by friction then I agree.

dcre 2 days ago | parent | prev | next [-]

I agree! I just don’t think the friction has to come from tediously trawling through a bunch of web pages that don’t contain the answer to your question.

mountain_peak 2 days ago | parent | prev [-]

"What an LLM is to me is the most remarkable tool that we've ever come up with, and it's the equivalent of a e-bike for our minds"

codr7 a day ago | parent [-]

Which is about as useful as a bike for our airplanes.

CamperBob2 3 days ago | parent | prev | next [-]

I don't want to waste time learning how to install and configure ephemeral tools that will be obsolete before I ever need to use them again.

ggggffggggg 3 days ago | parent | next [-]

Exactly, the whole point is it wouldn’t take 30 minutes (more like 3 hours) if the tooling didn’t change all the fucking time. And if the ecosystem wasn’t a house of cards 8 layers of json configuration tall.

Instead you’d learn it, remember it, and it would be useful next time. But it’s not.

Akronymus 2 days ago | parent | prev [-]

And I don't want to use tools I don't understand at least to some degree. I always get nervous when I do something but don't know why I'm doing it.

xnx 2 days ago | parent [-]

Depends on what level of abstraction you're comfortable with. I have no problem driving a car I didn't build.

Akronymus 2 days ago | parent [-]

I didn't build my car either. But I understand a bit of most of the main mechanics, like how the ABS works, how power steering works, how an ICE works, and so on.

qualifck 2 days ago | parent | prev | next [-]

Not necessarily. The end result of googling a problem might be copying a working piece of code off of stack exchange etc. without putting any work into understanding it.

Some people will try to vibe out everything with LLMs, but other people will use them to help engage with their coding more directly and better understand what's happening, not do worse.

3 days ago | parent | prev | next [-]
[deleted]
spankibalt 3 days ago | parent | prev | next [-]

I don't think "learning" is a goal here...

ragequittah a day ago | parent | prev | next [-]

Usually the thing you've learned after googling for half an hour is that Google isn't very useful for search anymore.

enraged_camel 3 days ago | parent | prev | next [-]

>> The difference is that after you’ve googled it for ½ hour, you’ve learned something.

I've been programming for 15+ years, and I think I've forgotten the overwhelming majority of the things I've googled. Hell, I can barely remember the things I've googled yesterday.

rob74 3 days ago | parent [-]

Additionally, in the good/bad old days of using StackOverflow, maybe 10% of the answers actually explained how that thing you wanted to do actually worked, the rest just dumped some code on you and left you to figure it out by yourself, or more likely just copy & paste it and be happy when it worked (if you were lucky)...

ajmurmann 2 days ago | parent | prev | next [-]

I don't think I'll learn anything by yet again implementing authentication, password reset, forgotten password, etc.

fleroviumna 2 days ago | parent | prev | next [-]

[dead]

visarga 3 days ago | parent | prev [-]

Why train to pedal fast when we already got motorcycles? You are preparing for yesterday's needs. There will never be a time when we need to solve this manually like it's 2019. Even in 2019 we would probably have used Google; solving was already based on extensive web resources. While in 1995 you would really have needed to do it manually.

Instead of manual coding training, your time is better invested in learning to channel coding agents, how to test code to our satisfaction, and how to know if what the AI did was any good. That is what we need to train to do. Testing without manual review, because manual review is just vibes, while tests are hard. If we treat AI-generated code like human code that requires a line-by-line peer review, we are just walking the motorcycle.

How do we automate our human in the loop vibe reactions?

oblio 2 days ago | parent | next [-]

> Why train to pedal fast when we already got motorcycles? You are preparing for yesterday's needs.

This is funny in the sense that, in a properly built urban environment, bicycles are one of the best ways to add some physical activity to a time-constrained schedule, as we're discovering.

philipwhiuk 2 days ago | parent | prev | next [-]

> Instead of manual coding training your time is better invested in learning to channel coding agents

All channelling is broken when the model is updated. Being knowledgeable about the foibles of a particular model release is a waste of time.

> how to test code to our satisfaction

Sure testing has value.

> how to know if what AI did was any good

This is what code review is for.

> Testing without manual review, because manual review is just vibes

Calling manual review vibes is utterly ridiculous. It's not vibes to point out an O(n!) structure. It's not vibes to point out missing cases.

If your code reviews are 'vibes', you're bad at code review

> If we treat AI-generated code like human code that requires a line-by-line peer review, we are just walking the motorcycle.

To fix the analogy you're not reviewing the motorcycle, you're reviewing the motorcycle's behaviour during the lap.

visarga 2 days ago | parent [-]

> This is what code review is for.

My point is that visual inspection of code is just "vibe testing", and you can't reproduce it. Even you yourself, 6 months later, can't fully repeat the vibe check "LGTM" signal. That is why the proper form is a code test.

ben_w 2 days ago | parent | prev [-]

Yes and no.

Yes, I reckon coding is dead.

No, that doesn't mean there's nothing to learn.

People like to make comparisons to calculators rendering mental arithmetic obsolete, so here's an anecdote: First year of university, I went to a local store and picked up three items each costing less than £1, the cashier rang up a total of more than £3 (I'd calculated the exact total and pre-prepared the change before reaching the head of the queue, but the exact price of 3 items isn't important enough to remember 20+ years later). The till itself was undoubtedly perfectly executing whatever maths it had been given, I assume the cashier mistyped or double-scanned. As I said, I had the exact total, the fact that I had to explain "three items costing less than £1 each cannot add up to more than £3" to the cashier shows that even this trivial level of mental arithmetic is not universal.

I now code with LLMs. They are so much faster than doing it by hand. But if I didn't already have experience of code review, I'd be limited to vibe-coding (by the original definition, not even checking). I've experimented with that to see what the result is, and the result is technical debt building up. I know what to do about that because of my experience with it in the past, and I can guide the LLM through that process, but if I didn't have that experience, the LLM would pile up more and more technical debt and grind the metaphorical motorbike's metaphorical wheels into the metaphorical mud.

visarga 2 days ago | parent [-]

> But if I didn't already have experience of code review, I'd be limited to vibe-coding (by the original definition, not even checking).

Code review done visually is "just vibe testing" in my book. It is not something you can reproduce, it depends on the context in your head this moment. So we need actual code tests. Relying on "Looks Good To Me" is hand waving, code smell level testing.

We are discussing vibe coding but the problem is actually vibe testing. You don't even need to be in the AI age to vibe test, it's how we always did it when manually reviewing code. And in this age it means "walking your motorcycle" speed; we need to automate this with more extensive code tests.

ben_w 2 days ago | parent [-]

I agree that actual tests are also necessary, that code review is not enough by itself. As LLMs can also write tests, I think getting as close as is sane to 100% code coverage is almost the first thing people should be doing with LLM assistance (and also, "as close as is sane": make sure that it really is a question of "I thought carefully and have good reason why there's no point testing this" rather than "I'm done writing test code, I'm sure it's fine to not test this", because LLMs are just that cheap).

However, code review can spot things like "this is O(n^2) when it could be O(n•log(n))", or "you're doing a server round trip for each item instead of parallelising them" etc.
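For the round-trip case, the change a reviewer would ask for is roughly this (a minimal Go sketch, with a made-up fetchItem helper and example URL, not anything from a real codebase):

  package main

  import (
      "fmt"
      "net/http"
      "sync"
  )

  // fetchItem is a hypothetical helper: one GET per item ID.
  func fetchItem(id string) error {
      resp, err := http.Get("https://api.example.com/items/" + id)
      if err != nil {
          return err
      }
      return resp.Body.Close()
  }

  func main() {
      ids := []string{"a", "b", "c"}

      // What review should flag: one server round trip per item, latencies add up.
      for _, id := range ids {
          if err := fetchItem(id); err != nil {
              fmt.Println("fetch failed:", err)
          }
      }

      // The suggested fix: issue the same requests concurrently.
      var wg sync.WaitGroup
      for _, id := range ids {
          wg.Add(1)
          go func(id string) {
              defer wg.Done()
              if err := fetchItem(id); err != nil {
                  fmt.Println("fetch failed:", err)
              }
          }(id)
      }
      wg.Wait()
  }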

You can also ask an LLM for a code review. They're fast and cheap, and whatever the LLM catches is something you get without having to waste a coworker's time. But LLMs have blind spots, and more importantly all LLMs (being trained on roughly the same stuff in roughly the same way) have roughly the same blind spots, whereas human blind spots are less correlated and expand coverage.

And code smells are still relevant for LLMs. You do want to make sure they're e.g. using a centralised UI style system and not copy-pasting style into each widget, because duplication wastes tokens and is harder to correctly update with LLMs for much the same reason it is with humans: stuff gets missed during the process when it's copypasta.

visarga 2 days ago | parent [-]

I am personally working on formalizing the design stage as well, the core concepts being Architecture, Goal, Solution and Implementation. That would make something like the complexity of an algorithm an explicit decision in a graph. It would make constraints and dependencies explicitly formalized. You can track any code to its solution (design stage) and goals, account for everything top-down and bottom-up, and assign tests for all nodes.

Take a look here: https://github.com/horiacristescu/archlib/blob/main/examples... (but it's still WIP, I am not there yet)
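
To give a flavour of the kind of structure I mean (a purely illustrative Go sketch; these are not the actual archlib types, and the node names are made up):

  package main

  // NodeKind marks which design stage a node belongs to.
  type NodeKind string

  const (
      Architecture   NodeKind = "architecture"
      Goal           NodeKind = "goal"
      Solution       NodeKind = "solution"
      Implementation NodeKind = "implementation"
  )

  // Node is one explicit decision in the design graph. DependsOn records
  // constraints and dependencies, and Tests names the tests assigned to
  // verify this node, so code can be traced back to its solution and goals.
  type Node struct {
      ID        string
      Kind      NodeKind
      Summary   string
      DependsOn []string
      Tests     []string
  }

  func main() {
      _ = []Node{
          {ID: "G1", Kind: Goal, Summary: "import finishes in under 2 seconds"},
          {ID: "S1", Kind: Solution, Summary: "sort with an O(n log n) algorithm",
              DependsOn: []string{"G1"}, Tests: []string{"TestImportUnder2s"}},
      }
  }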

3 days ago | parent | prev | next [-]
[deleted]
jimbokun 3 days ago | parent | prev [-]

The difference is whether or not you find computers interesting and enjoy understanding how they work.

For the people who just want to solve some problem unrelated to computers but require a computer for some part of the task, yes AI would be more “fun”.

phil21 3 days ago | parent | next [-]

I don’t find this to be true. I enjoy computers quite a bit. I enjoy the hardware, scaling problems, theory behind things, operating systems, networking, etc.

Most of all I find what computers allow humanity to achieve extremely interesting and motivating. I call them the world's most complicated robot.

I don’t find coding overly fun in itself. What I find fun is the results I get when I program something that has the result I desire. Maybe that’s creating a service for friends to use, maybe it’s a personal IT project, maybe it’s having commercial quality WiFi at home everyone is amazed at when they visit, etc. Sometimes - even often - it’s the understanding that leads to pride in craftsmanship.

But programming itself is just a chore for me to get done in service of whatever final outcome I’m attempting to achieve. Could be delivering bits on the internet for work, or automating OS installs to look at the 50 racks of servers humming away with cable porn level work done in the cabinets.

I never enjoyed messing around with HTML all that much in the 90s. But I was motivated to learn it just enough to achieve the cool ideas I could come up with as a teenager and share them with my friends.

I can appreciate clean maintainable code, which is the only real reason LLMs don’t scratch the itch as much as you’d expect for someone like me.

tjr 3 days ago | parent | next [-]

What I really enjoy in programming is algorithms and bit-twiddling and stuff that might be in Knuth or HAKMEM or whatever. That’s fun. I like writing Lisp especially, and doing cool, elegant functional programs.

I don’t enjoy boilerplate. I don’t necessarily enjoy all of the error checking and polishing and minutia in turning algorithms into shippable products.

I find AI can be immensely helpful in making real things for people to use, but I still enjoy doing what I find fun by hand.

girvo 3 days ago | parent | prev [-]

See, I do though. I enjoy the act, the craft of programming. It's intrinsically fun for me, and has been for the 25 years I've been doing it at this point, and it still hasn't stopped being fun!

Different strokes I guess

phil21 2 days ago | parent [-]

Oh I totally agree! I have a lot of fun chatting with friends/coworkers who are super into programming as an art and/or passion.

I just was pushing back on the “you aren’t into computers if you don’t get intrinsic joy out of programming itself” bit.

ben_w 2 days ago | parent | prev [-]

> The difference is whether or not you find computers interesting and enjoy understanding how they work.

I'm a stereotypical nerd, into learning for its own sake.

I can explain computers from the quantum mechanics of band gaps in semiconductors up to fudging objects into C and the basics of operating systems with pre-emptive multitasking, virtual memory, and copy-on-write as they were c. 2004.

Further up the stack it gets fuzzy (not that even these foundations aren't fuzzy; I know the "basics" of OSes, but I couldn't write one); e.g. SwiftUI is basically a magic box, and I find it a pain to work with as a result.

LLM output is easier to understand than SwiftUI, even if the LLM itself has much weirder things going on inside.

jiveturkey 2 days ago | parent [-]

So, can you tell me everything that happens after you type www.google.com<RET> into the browser? ;)

ben_w 2 days ago | parent | next [-]

Nope, but that was the example I had in mind when I chose my phrasing :)

I think I can describe the principles at work with DNS, but not all of how IP packets are actually routed; the physics of beamforming and QAM, but none of the protocol of WiFi; the basics of error correction codes, but only the basics and they're probably out of date; the basic ideas used in private key crypto but not all of HTTPS; I'd have to look up the OSI 7-layer model to remember all the layers; I understand older UI systems (I've even written some from scratch), but I'm unsure how much of current web browsers are using system widgets vs. it all being styled HTML; interrupts as they used to be, but not necessarily as they still are; my knowledge of JavaScript is basic; and my total knowledge of how certificate signing works is the conceptual level of it being an application of public-private key cryptography.

I have e.g. absolutely no idea why Chrome is famously a memory hog, and I've never learned how anything is scheduled between cores at the OS level.

jimbokun 2 days ago | parent | prev [-]

Curious if anyone has turned answering this question into an entire book, because it could be a great read.

arjie 3 days ago | parent | prev | next [-]

I think a lot of us just discovered that the actual programming isn't the fun part for us. It turns out I don't like writing code as much as I thought. I like solving my problems. The activation energy for a lot of things was much higher than it is now. Now it's pretty low. That's great for me. Baby's sleeping, 3d printer is rolling, and I get to make a little bit of progress on something super quick. It's fantastic.

blitz_skull 3 days ago | parent | next [-]

This 1000x!

I had a bit of an identity crisis when AI first landed and started producing good code. “If I’m not the man who can type quickly, accurately, and build working programs… WHO AM I?”

But as you pointed out, I quickly realized I was never that guy. I was the guy who made problems go away, usually with code.

Now I can make so many problems go away, it feels like cheating. As it turns out, writing code isn’t super useful. It’s the application of the code, the judgement of which problems to solve and how to solve them, that truly matters.

And that sparks a LOT of joy.

spankibalt 3 days ago | parent [-]

[flagged]

ragequittah 3 days ago | parent | next [-]

I imagine this same argument happening when people stopped using machine code and assembly en masse and started using FORTRAN or COBOL. You don't really know what you're doing unless you're spending the effort I spent!

spankibalt 3 days ago | parent [-]

> "I imagine this same argument happening when people stopped using machine code and assembly en masse and started using FORTRAN or COBOL."

Yeah, certainly. But since this has nothing to do with my argument, which was an answer to the very existential question of a (postulated) non-coder, and not a comment on a forgotten pissing contest between coders, it's utterly irrelevant.

:(

framapotari 3 days ago | parent [-]

This is quite funny when you created the pissing contest between "coders" and "non-coders" in this thread. Those labels seem very important to you.

spankibalt 3 days ago | parent [-]

I didn't "create" the pissing contest, I merely pointed it out in someone else's drivel.

And of course, these labels are important to me for (precise) language defines the boundaries of my world; coder vs. non-coder, medico vs. quack, writer vs. analphabet, truth vs. lie, etc. Elementary.

cthalupa 2 days ago | parent [-]

I find it quite interesting that you categorize non-coders the same as quacks, analphabets, and lies.

I would never consider myself a coder - though I can and have written quite a lot of code over the years - because it has always been a means to the ends for me. I don't particularly enjoy writing code. Programming isn't a passion. I can and have built working programs without a line of copy and pasted code off stack overflow or using an LLM. Because I needed to to solve a problem.

But there are things I would call myself, things I do and enjoy and am good at. But I wouldn't position people who can't do those things as being the same as a quack.

You also claim not to be the one who started the pissing contest, but you called someone who claims to have written plenty of code themselves a coding-illiterate just because now they'd rather use an LLM than do it themselves. I suppose you could claim they are lying about it, or make some no true Scotsman type argument, but that seems silly.

You basically took some people talking about their own opinions on what they find enjoyable, and saying that AI-driven coding scratches that itch for them even more than writing code itself does, and then began to be quite hostile towards them with boatloads of denigrating language and derision.

spankibalt 2 days ago | parent [-]

> "I find it quite interesting that you categorize non-coders the same as quacks, analphabets, and lies."

I categorized them not as "the same", but as examples of concept-delineating polar opposites. This was an answer to somebody who essentially trotted out the "but they're just labels!1!!" line, which was already considered intellectually lazy before it was turned into a sad meme by people who married their bongs back in the 90s.

> "I would never consider myself a coder - though I can and have written quite a lot of code over the years [...]"

Good for you. A coder, to me, is simply somebody who can produce working programs on their own and has the necessary occupational (self-)respect. This fans out into several degrees of capability, of course.

> "[...] but you called someone who claims to have written plenty of code themselves a coding-illiterate just because now they'd rather use an LLM than do it themselves. "

No. I simply answered this one question:

> “If I’m not the man who can [...] build working programs… WHO AM I?”

Aside from that I reflected on an insulting(ly daft) but extremely common attitude amongst sloperators, especially on parasocial media platforms:

> "As it turns out, writing code isn’t super useful."

Imagine I go to some other SIG to say shit like this: As it turns out, [reading and writing words/playing or operating an instrument or tool/drawing/calculating/...] isn’t "super useful". Suckers!

I'd expect to get properly mocked and then banned.

> "You basically took some people talking about their own opinions on what they find enjoyable, [...]"

Congratulations, you're just the next strawman salesman. For the last time, bambini: I don't care if this guy uses LLMs and enjoys it... for that was never the focus of my argument at all.

jtbayly 3 days ago | parent | prev | next [-]

You definitely completely misconstrued what was said and meant.

It appears you have yet to grapple with the question asked. And I suspect you would be helped by doing so. Let me restate the question for you:

If actually writing code can be done without you or any coworker now, by AI, what is your purpose?

3 days ago | parent [-]
[deleted]
ch4s3 3 days ago | parent | prev | next [-]

Anyone who can’t read Proust and write a compelling essay about the themes is illiterate!

spankibalt 3 days ago | parent [-]

One day you actually might discover there's different levels of literacy. Like there's something between 0 and 255!

Here's a pointer: Not being able to read (terminus technicus: analphabet) makes you a non-reader, just as not being able to cobble together a working proggie on your own merits makes you a non-coder. Man alive...

ch4s3 3 days ago | parent [-]

That’s quite literally my point.

spankibalt 3 days ago | parent [-]

[flagged]

ch4s3 2 days ago | parent [-]

what do you think I meant?

jimbokun 3 days ago | parent | prev [-]

It’s possible to be someone who’s very good at writing quality programs but still enjoy delegating as much of that as possible to AI to focus on other things.

spankibalt 3 days ago | parent [-]

> "It’s possible to be someone who’s very good at writing quality programs but still enjoy delegating as much of that as possible to AI to focus on other things."

That's true, Jimbo. And besides the point, because:

1. It wasn't about someone who's very good at writing quality programs, but someone who perceives themselves as someone who "is not the man who can build working programs". Do you comprehend the difference?

2. The enjoyment of using slopware wasn't part of the argument (see my answer to the question). That's not something I remotely care about. For the question my answer referred to, please see the cited text before the question mark. <3

3. People who define the very solution to the problem as "isn't super useful" do at least two things:

They misunderstood, or misunderstand, their capabilities in problem solving/solutions, and most likely (have) delude(d) themselves, and...

They look down on people who actually have done, do, and will do the legwork to solve these very problems ("Your work isn't super useful"). Back in the day we called 'em lamers and/or posers.

I hope that clears things up.

cthalupa 2 days ago | parent [-]

> 1. It wasn't about someone who's very good at writing quality programs, but someone who perceives themselves as someone who "is not the man who can build working programs". Do you comprehend the difference?

For someone who has taken heavy enjoyment in likening people to analphabets you seem to have entirely misunderstood (or if you understood, heavily misconstrued) the initial point of the person you are responding to.

The entire point is that their identity WAS someone who is the man who can build those programs, and now AI was threatening to do the same thing.

Unless you are presupposing that anyone who can be happy with the output of LLMs for writing code is simply incapable of writing quality code themselves. Which would be silly.

jtbayly 3 days ago | parent | prev | next [-]

Exactly. And I was never particularly good at coding, either. Pairing with Gemini to finally figure out how to decompile an old Java app so I can make little changes to my user profile and some action files? That was fun! And I was never going to be able to figure out how to do it on my own. I had tried!

jimbokun 3 days ago | parent [-]

Fewer things sound less interesting to me than that.

jtbayly 3 days ago | parent | next [-]

Fair enough. But that particular thing could be anything that has been bothering you but you didn’t have the time or expertise to fix yourself.

I wanted that fixed, and I had given up on ever seeing it fixed. Suddenly, in only two hours, I had it fixed. And I learned a lot in the process, too!

cmwelsh 3 days ago | parent | prev [-]

> Fewer things sound less interesting to me than that.

To each their own! I think the market for folks who understand their own problems is exploding! It’s free money.

popalchemist 3 days ago | parent | prev | next [-]

Literally shipping a vibe-coded feature as my baby sleeps, while reading this comment thread. It's the wild west again. I love it.

codr7 2 days ago | parent [-]

Maybe you can tell us the name of the software so we can avoid it?

mrkramer 2 days ago | parent | next [-]

Google, Facebook, Amazon, Microsoft... they all literally have vibe-coded code; it's not about vibe-coded or not, it is about how well the code is designed, how efficient and bug-free it is. Ofc pro coders can debug it and fix it better than some amateur coder, but still, LLMs are so valuable. I let Gemini vibe code little web projects for me and it serves me well. Although you have to explain everything step by step to it, and sometimes when it fixes one bug, it accidentally introduces another. But we fix bugs together and learn together. And btw, when Gemini fixes bugs, it puts comments in the code on how the particular bug was fixed.

duskdozer 13 hours ago | parent [-]

Are those supposed to be companies we don't want to avoid?

popalchemist 2 days ago | parent | prev [-]

It's a personal project. No need to be a dick.

codr7 a day ago | parent [-]

Presenting AI slop as software is about as big as it gets.

RicoElectrico 3 days ago | parent | prev [-]

This. Busy-beavering is why desktop Linux is where it is - rewriting stuff, making it "elegant" while breaking backwards compatibility - instead of focusing on the outcome.

int_19h 3 days ago | parent [-]

macOS breaks backwards compatibility all the time, and yet...

sokoloff 2 days ago | parent [-]

Other than security-related changes, as a user, I find macOS to be quite generous about its evolution, supporting deprecated APIs for many years, etc.

SIP and the transition to a read-only system volume are the only two things that I remember broke things that I noticed.

It’s not Windows-level of backwards compatibility, but it’s quite good overall from the user side.

freedomben 3 days ago | parent | prev | next [-]

It's just fun in a different way now. I've long had dozens of ideas for things I wanted to build, and never enough time to really even build one of them. Over the last few months, I've been able to crank out several of these projects to satisfactory results. The code is not a beautiful work of art like I would prefer it to be, and the fun part is no longer the actual code and working in the code base like it used to be. The fun part now is being able to have an app or tool that gets the job I needed done. These are rarely important jobs, just things that I want as a personal user. Some of them have been good enough that I shipped them for other users, but the vast majority are just things I use personally.

Just yesterday for example, I used AI to build a GTK app that has a bunch of sports team related sound effects built into it. I could have coded this by hand in 45 minutes, but it only took 10 minutes with AI. That's not the best part though. The best part is that I was able to use AI to get it building into an AppImage in a container so I can distribute it to myself as a single static file that I can execute on any system I want. Dicking with builds and distribution was always the painful part and something that I never enjoyed, but without it, usage is a pain. I've even gone back to projects I built a decade ago or more and got them building against modern libraries and distributed as RPMs or AppImages that I can trivially install on all of my systems.

The joy is now in the results rather than the process, but it is joy nonetheless.

iamflimflam1 3 days ago | parent | next [-]

I think, for a lot of people, solving the problem was always the fun part.

There is immense pleasure in a nice piece of code - something that is elegant, clever and simple at the same time.

Grinding out code to get something finished - less fun…

TuringTest 3 days ago | parent [-]

It depends. Sometimes the joy is in discovering what problem you are solving, by exploring the space of possibilities on features and workflows in a domain.

For that, having elegant and simple software is not needed; getting features fast to try out how they work is the basis of the pleasure, so having to write every detail by hand reduces the fun.

jimbokun 3 days ago | parent [-]

Sounds like someone who enjoys listening to music but not composing or performing music.

dpkirchner 3 days ago | parent | next [-]

Or maybe someone DJing instead of creating music from scratch.

TuringTest 2 days ago | parent | prev [-]

Or someone who enjoys playing music but not building their own instrument from scratch.

jimbokun 2 days ago | parent [-]

No.

Building the instrument would be electrical engineering.

Playing the instrument would be writing software.

apitman 3 days ago | parent | prev [-]

I use LLMs for code at work, but I've been a bit hesitant to dive in for side projects because I'm worried about the cost.

Is it necessary to pay $200/mo to actually ship things or will $20/mo do it? Obviously I could just try it myself and see how far I get, but I'm curious to hear from someone a bit further down the path.

vineyardmike 3 days ago | parent | next [-]

The $20/mo subscription (Claude Code) that I've been using for my side projects has been more than enough for me 90% of the time. I mostly use the cheaper models lately (Haiku) and accept that it'll need a bit more intervention, but it's for personal stuff and fun so that's ok. If you use VSCode, Antigravity or another IDE that's trying to market their LLM integration, then you'll also get a tiny allowance of additional tokens through them.

I'll use it for a few hours at a time, a couple days a week, often while watching TV or whatever. I do side projects more on long rainy weekends, and maybe not even every week during the summer. I'll hit the limit if I'm stuck inside on a boring Sunday and have an idea in my head I really wanted to try out and not stop until I'm done, but usually I never hit the limit. I don't think I've hit the limit since I switched my default to Haiku FWIW.

The stats say I've generated 182,661 output tokens in the last month (across 16 days), and total usage if via API would cost $39.67.

naught0 a day ago | parent | prev | next [-]

You can use Gemini for free. Or enable the API and pay a few bucks for variable usage every month. Could be cents if you don't use it much like me

indigodaddy 3 days ago | parent | prev | next [-]

Check out the Google One AI Pro plan ($20/mo) in combination with Antigravity (Google's VS Code thingy), which has access to Opus 4.5. This combo (AG/AI Pro plan/Opus 4.5) is all the rage on Reddit, with users reporting incredibly generous limits (which most users say they never hit even with high usage) that reset every 5 hours.

ben_w 2 days ago | parent | prev | next [-]

$20 is fine. I used a free trial before Christmas, and my experience was essentially that my code review speed would've prevented me doing more than twice that anyway… and that's without a full time job, so if I was working full time, I'd only have enough free time to review $20/month of Claude's output.

You can vibe code, i.e. no code review, but this builds up technical debt. Think of it as a junior who is doing one sprint's worth of work every 24 hours of wall-clock time when considering how much debt and how fast it will build up.

freedomben 3 days ago | parent | prev | next [-]

Depending on how much you use, you can pay API prices and get pretty far for 20 bucks a month or less. If you exhaust that, surprisingly, I recommend getting Gemini with the Google AI Pro subscription. You get a lot of Gemini CLI usage with that.

ACow_Adonis 2 days ago | parent | prev | next [-]

In practice, I find it depends on your work scale, topic and cadence.

I started on the $20 plans for a bit of an experiment, needing to see about this whole AI thing. And for the first month or two that was enough to get the flavor. It let me see how to work. I was still copy/pasting mostly, thinking about what to do.

As I got more confident I moved to the agents and the integrated editors. Then I realised I could open more than one editor or agent at a time while each AI instance was doing its work.

I discovered that when I'm getting the AI agents to summarise, write reports, investigate issues, make plans, implement changes, run builds, organise git, etc., now I can alt-tab and drive anywhere between 2-6 projects at once, and I don't have to do any of the boring boilerplate or administrivia, because the AI does that; it's what it's great for.

What used to be unthinkable and annoying context switching now lets me focus in on different parts of the project that actually matter: firing off instructions, providing instructions to the next agent, ushering them out the door and then checking on the next intern in the queue. Give them feedback on their work, usher them on, next intern. The main task now is kind of managing the scope and context window of each AI, and how to structure big projects to take advantage of that. Honestly though, I don't view it as too much more than functional decomposition. You've still got a big problem; now how do you break it down?

At this rate I can sustain the $100 Claude plan, but honestly I don't need to go further than that, and that's basically me working full time in parallel streams, although I might be using it at relatively cheap times, so it or the $200 plan seems about right for full time work.

I can see how theoretically you could go even above that, going into full auto-pilot mode, but I feel I'm already at a place of diminishing marginal returns; I don't usually go over the $100 Claude Code plan, and the AIs can't do the complex work reliably enough to be left alone anyway. So at the moment, if you're going full time, I feel they're the sweet spot.

The $20 plans are fine for getting a flavor for the first month or two, but once you come up to speed you'll breeze past their limitations quickly.

camel_Snake 3 days ago | parent | prev | next [-]

I have a feeling you are using SOTA models at work and aren't used to just how cheap the non-Anthropic/Google/OAI options are these days. GLM's coding subscription is like $6/month if you buy a full year.

Marha01 3 days ago | parent | prev [-]

You can use AI code editor that allows you to use your own API key, so you pay per-token, not a fixed monthly fee. For example Cline or Roo Code.

int_19h 3 days ago | parent [-]

They all let you do that now, including Claude Code itself. You can choose between pay per token and subscription.

Which means that a sensible way to go about those things is to start with a $20 subscription to get access to the best models, and then look at your extra per-token expenses and whether they justify that $200 monthly.

xav_authentique 3 days ago | parent | prev | next [-]

I think this is showing the difference between people who like to /make/ things and those that like to make /things/. People who write software because they see a solution for a problem that can be fixed with software seem to benefit the most from LLM technology. It's almost the inverse for the people who write software because they like the process of writing software.

Defletter 3 days ago | parent | next [-]

Surely there has to be some level of "getting stuff done"/"achieving a goal" when /making/ things, otherwise you'd be foregoing for-loops because writing each iteration manually is more fun.

recursive 3 days ago | parent | next [-]

I think you misunderstand the perspective of someone who likes writing code. It's not the pressing of keys on the keyboard. It's figuring out which keys to press. Setting aside for the moment that most loops have a dynamic iteration count, typing out the second loop body is not fun if it's the same as the first.

I do code golf for fun. My favorite kind of code to write is code I'll never have to support. LLMs are not sparking joy. I wish I was old enough to retire.

jesse__ 3 days ago | parent | prev | next [-]

I have a 10-year-old side project that I've dumped tens of thousands of hours into. "Ship the game" was an explicit non-goal of the project for the vast majority of that time.

Sometimes, the journey is the destination.

pests 3 days ago | parent [-]

And sometimes the destination is the destination and the journey is a slog.

jesse__ 3 days ago | parent [-]

I mean, sure. I was just pointing out to the commentor that sometimes "getting stuff done" isn't the point.

xav_authentique 3 days ago | parent | prev [-]

Sure, but, in the real world, for the software to deliver a solution, it doesn't really matter if something is modelled in beautiful objects and concise packages, or if it's written in one big method. So for those that are more on the making /things/ side of the spectrum, I guess they wouldn't care if the LLM outputs code that has each iteration written separately.

It's just that if you really like to work on your craftsmanship, you spend most of the time rewriting/remodelling because that's where the fun is if you're more on the /making/ things side of the spectrum, and LLMs don't really assist in that part (yet?). Maybe LLMs could be used to discuss ways to model a problem space?

antonvs 3 days ago | parent | prev [-]

I like both the process and the product, and I like using LLMs.

You can use LLMs in whatever way works for you. Objections like the ones in this thread seem to assume that the LLM determines the process, but that’s not true at present.

Perhaps they’re worrying about what might happen in future, but more likely they’re just resisting change in the usual way of inventing objections against something they haven’t seriously tried. These objections serve more as emotional justifications to avoid changing, than rational positions.

hxtk 3 days ago | parent | prev | next [-]

As I've gotten more experience I've tended to find more fun in tinkering with architectures than tinkering with code. I'm currently working on making a secure zero-trust bare metal kubernetes deployment that relies on an immutable UKI and TPM remote attestation. I'm making heavy use of LLMs for the different implementation details as I experiment with the architecture. As far as I know, to the extent I'm doing anything novel, it's because it's not a reasonable approach for engineering reasons even if it technically works, but I'm learning a lot about how TPMs work and the boot process and the kernel.

I still enjoy writing code as well, but I see them as separate hobbies. LLMs can take my hand-optimized assembly drag racing or the joy of writing a well-crafted library from my cold dead hands, but that's not always what I'm trying to do and I'll gladly have an LLM write my OCI layout directory to CPIO helper or my Bazel rule for putting together a configuration file and building the kernel so that I can spend my time thinking about how the big pieces fit together and how I want to handle trust roots and cold starts.

MrDarcy 3 days ago | parent [-]

So much this. The act of having the agent create a research report first, a detailed plan second, then maybe implement it is itself fun and enjoyable. The implementation is the tedious part these days; the pie-in-the-sky research and planning is the fun part, and the agent is a font of knowledge, especially when it comes to integrating 3 or 4 languages together.

hxtk 2 days ago | parent [-]

This goes further into LLM usage than I prefer to go. I learn so much better when I do the research and make the plan myself that I wouldn’t let an LLM do that part even if I trusted the LLM to do a good job.

I basically don’t outsource stuff to an LLM unless I know roughly what to expect the LLM output to look like and I’m just saving myself a bunch of typing.

“Could you make me a Go module with an API similar to archive/tar.Writer that produces a CPIO archive in the newcx format?” was an example from this project.
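
To make the shape concrete, here is a rough sketch of the kind of API I mean - it mirrors archive/tar.Writer but emits the older, well-documented SVR4 "newc" ("070701") header rather than newcx, and the bookkeeping (inodes, mtimes, hard links, block padding) is deliberately simplified. The names are illustrative only, not what the agent actually produced:

    package cpio

    import (
        "fmt"
        "io"
    )

    // Header describes one archive entry (only the fields this sketch needs).
    type Header struct {
        Name string // path within the archive
        Mode int64  // type and permission bits, e.g. 0100644 for a regular file
        Size int64  // length of the entry's data in bytes
    }

    // Writer mirrors the archive/tar.Writer flow: WriteHeader, Write, Close.
    type Writer struct {
        w      io.Writer
        ino    int64 // fake inode counter, fine for an initramfs-style archive
        remain int64 // data bytes still expected for the current entry
        pad    int64 // NUL padding owed once the current entry's data is done
    }

    func NewWriter(w io.Writer) *Writer { return &Writer{w: w} }

    // WriteHeader emits a newc ("070701") header and the NUL-terminated name,
    // padded to a 4-byte boundary; the caller then writes hdr.Size bytes.
    func (cw *Writer) WriteHeader(hdr *Header) error {
        if err := cw.finish(); err != nil {
            return err
        }
        cw.ino++
        name := hdr.Name + "\x00"
        // Magic plus 13 fields of 8 hex digits: ino, mode, uid, gid, nlink,
        // mtime, filesize, devmajor, devminor, rdevmajor, rdevminor, namesize, check.
        h := fmt.Sprintf("070701%08x%08x%08x%08x%08x%08x%08x%08x%08x%08x%08x%08x%08x",
            cw.ino, hdr.Mode, 0, 0, 1, 0, hdr.Size, 0, 0, 0, 0, len(name), 0)
        if _, err := io.WriteString(cw.w, h+name); err != nil {
            return err
        }
        cw.remain = hdr.Size
        cw.pad = (4 - hdr.Size%4) % 4
        return cw.zeros((4 - int64(len(h)+len(name))%4) % 4)
    }

    // Write supplies data for the most recent header.
    func (cw *Writer) Write(p []byte) (int, error) {
        if int64(len(p)) > cw.remain {
            return 0, fmt.Errorf("cpio: write exceeds declared size")
        }
        n, err := cw.w.Write(p)
        cw.remain -= int64(n)
        return n, err
    }

    // Close emits the conventional trailer record that ends the archive.
    // (Real tools also pad the archive out to a block boundary; omitted here.)
    func (cw *Writer) Close() error {
        if err := cw.WriteHeader(&Header{Name: "TRAILER!!!"}); err != nil {
            return err
        }
        return cw.finish()
    }

    func (cw *Writer) finish() error {
        if cw.remain != 0 {
            return fmt.Errorf("cpio: %d bytes of entry data missing", cw.remain)
        }
        err := cw.zeros(cw.pad)
        cw.pad = 0
        return err
    }

    func (cw *Writer) zeros(n int64) error {
        if n == 0 {
            return nil
        }
        _, err := cw.w.Write(make([]byte, n))
        return err
    }

Usage follows the tar pattern: NewWriter, then WriteHeader and Write for each file, then Close to emit the TRAILER!!! record.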

lmorchard 2 days ago | parent [-]

Yeah, this is a lot of what I'm doing with LLM code generation these days: I've been there, I've done that, I vaguely know what the right code would look like when I see it. Rather than spend 30-60 minutes refreshing myself to swap the context back into my head, I prompt Claude to generate a thing that I know can be done.

Much of the time, it generates basically what I would have written, but faster. Sometimes, better, because it has no concept of boredom or impatience while it produces exhaustive tests or fixes style problems. I review, test, demand refinements, and tweak a few things myself. By the end, I have a working thing and I've gotten a refresher on things anyway.

esperent 3 days ago | parent | prev | next [-]

Something happened to me a few years ago. I used to write code professionally and contribute to open source a lot. I was freelancing on other people's projects and contributing to mature projects so I was doing hard work, mostly at a low level (I mean algorithms, performance fixes, small new features, rather than high level project architecture).

I was working on an open source contribution for a few days. Something that I struggled with, but I enjoyed the challenge and learned a lot from it.

As it happened someone else submitted a PR fixing the same issue around the same time. I wasn't bothered if mine got picked or not, it happens. But I remember looking at how similar both of our contributions were and feeling like we were using our brains as computers, just crunching algorithms and pumping in knowledge to create some technical code that was (at the time) impossible for a computer to create. This stayed with me for a while and I decided that doing this technical algorithm crunching wasn't the best use of my human brain. I was making myself interchangeable with all the other human (and now AI) code crunchers. I should move on to a higher level, either architectural or management.

This was a big deal for me because I did love (and still do) deeply understanding algorithms and mathematics.

I was extremely fortunate with timing as it was just around one year before AI coding became mainstream but early enough that it wasn't a factor in this shift. Now an AI could probably churn out a decent version of that algorithm in a few minutes.

I did move on to open my own business with my partner and haven't written much code in a few years. And when I do now I appreciate that I can focus on the high level stuff and create something that my business needs in a few hours without exhausting myself on low level algorithm crunching.

This isn't meant to put down the enjoyment of writing code for code's sake. I still do appreciate well written code and the craft that goes into it. I'm just documenting my personal shift and noting that enjoyment can be found on both sides.

wincy 3 days ago | parent | prev | next [-]

I’ve got kids and so seldom find myself with the time or energy to work on something. Cursor has really helped in that regard.

I have an extensive media collection of very large VR video files with very unhelpful names. I needed to figure out a good way to review which ones I wanted to keep and discard (over 30TB, almost 2000 files). It was fun sitting with Cursor and Claude to work on setting up a quick web UI, with calls out to ffmpeg to generate snapshots. It handled the “boring parts” with aplomb, getting me an HTML page with a little JavaScript to serve as my front end and making a super simple API. All this was still like 1000 lines and would have taken me days, or I would have copied some boilerplate and then modified it a little.

The problems Claude couldn’t figure out were also interesting, like the syntax of its ffmpeg calls being wrong and not skipping the frames we didn’t want to generate, so it was taking 100x longer than necessary by seeking through every file. Then I made some optimizations to how I had it configured, then realized I’d spent 3 hours generating thumbnails only for them to not display well on the page because each was an 8x1 tile.

At that point Claude wanted to regenerate all the thumbnails and I said “just display the image twice, with the first half displayed the first time and the second half displayed the second time”, saving myself a few hours. Hacky, but for a personal project, the right solution.

I still felt like I was tinkering in a way I haven’t in a while, and a project that I’d never have gotten around to (I’d probably have just bought another new hard drive instead) took me a couple of hours, most of which was actually marking the files as keep or delete. I ended up deleting 12TB of stuff I didn’t want, and it felt cool to write myself a bespoke tool rather than search around on the off chance that such a thing already exists.

It also gave me a mental framework for how to approach little products like this in the future: often a web UI and a simple API backend, like Node making external process calls, is going to be easier than making a full-fat Windows UI.

I have a similarly sized STL library from 3D printing and think I could apply mostly the same idea to that; in fact, it’s 99% the same except for swapping out the ffmpeg call for something that generates a snapshot of the STL at a few different angles.

cco 3 days ago | parent | prev | next [-]

There are many people who enjoy spending an afternoon working on a classic car. There are also many people who enjoy spending an afternoon driving a classic car.

Sometimes there are people who enjoy both. Sometimes there are people that really like driving but not the tinkering and some who are the opposite.

osullivj 3 days ago | parent [-]

Neat summary of Zen and the Art of Motorcycle Riding!

Defletter 3 days ago | parent | prev | next [-]

I yearn for the mindset where I actively choose to accomplish comparatively little in the brief spells I have to myself, and remain motivated. Part of what makes programming fun for me is actually achieving something. Which is not to say you have to use AI to be productive, or that you aren't achieving anything, but this is not the antithesis of what makes programming fun, only what makes it fun for you.

6r17 3 days ago | parent | prev | next [-]

Ultimately it's up to the user to decide what to do with their time; it's still a good bargain that leaves a lot of sovereignty to the user. I like to code a little too much; I got into deep tech to a degree I couldn't imagine before - but at some point you hit rock bottom and you've got to ship something that makes sense. I'm like a really technical "predator" - in the sense that, to be honest with myself, it has almost become a form of consumption rather than pure problem solving. For very passionate people it can be difficult to draw the line between pleasure and work - especially given that we just do what we like in the first place - so all that time feels robbed from us, while from the standpoint of a "shipper" who didn't care about it in the first place it feels like freedom.

But I'd argue that if anyone wants to jump into technical stuff, it has never been so openly accessible - it used to be that you could join some niche Slack where competent programmers were doing great stuff. Today a solo junior can ship you a key-value store that will be fighting Redis in benchmarks.

It really is not a time to slack off, in my opinion - everything feels like it already exists and has mostly been dealt with. But again - those who are frustrated with the status quo will always find something to do.

I get you, however, that this has created a very different space where previously acquired skill sets don't necessarily translate as well today - maybe finding one's niche is just going to be different than it was 10 years ago.

I like that the cards have been re-dealt though - it's arguably way more open than the Stack Overflow era and pre-AI, when knowledge was much more difficult to create.

plagiarist 3 days ago | parent | prev | next [-]

I do have productivity goals! I want to spend the half hour I have on the part I think is fun. Not on machine configuration, boilerplate, dependency resolution, 100 random errors with new frameworks that are maybe resolved with web searches.

simonw 3 days ago | parent | prev | next [-]

If you only get one or two half-hours a week it's probably more fun to use those to build working software than it is to inch forward on a project that won't do anything interesting for several more months.

ch4s3 3 days ago | parent | prev | next [-]

For me it automates a lot of the boilerplate that usually bogs me down on side projects. I can quickly spin up all of the stuff I hate doing and then fiddle with the interesting parts inside of a working scaffold of code. I recently did this with an Elixir wrapper around some Erlang OTP code I wanted to use. Figuring out how to glue together all of the parts that touched the Erlang and tracing all of the arguments through old OTP code would have absolutely stopped me from bothering with this in the past. Instead I’m having fun playing with the interface of my tool in ways that matter for my use case.

ashtonshears 3 days ago | parent | prev | next [-]

I enjoy coding for the ability to turn ideas into software. Seeing more rapid feature development, and also more rapid code cleanup and project architecture cleanup, is what makes AI-assisted coding enjoyable to me.

yieldcrv 3 days ago | parent | prev | next [-]

Look, yeah, one-shotting stuff makes generic UIs - an impressive feat, but generic.

It's getting years of side projects off the ground for me,

now in languages I never learned or got professional validation for: Rust, Lua for Roblox … in 2 parallel terminal windows and Claude Code instances,

all while I get to push frontend development further and more meticulously in a third. UX-heavy design with SVG animations? I can do that now; that's fun for me.

I can make experiences that I would never spend a business quarter on, and I can rapidly iterate on designs in a way I would never pay a Fiverr contractor or three for.

For me the main skill is knowing what I want, and it's entirely questionable whether that's a moat at all, but for now it is, because all those "no code" seeking product managers and idea guys are just enamored that they can make a generic something compile.

I know when to point out that the AI contradicted itself in a code concept, and when to interrupt when it's about to go off the rails.

So far so great, and my backend deployment proficiency has gone from CRUD-app only to replicating, understanding, and surpassing what the veteran backend devs on my teams could do.

I would previously have called myself full stack, but I knew where my limits in understanding were.

lowbloodsugar 3 days ago | parent | prev | next [-]

I enjoy noodling around with pointers and unsafe code in Rust. Claude wrote all the documentation, to Rust standards, with nice examples for every method.

I decided to write an app in Rust with a React UI, and Claude wrote almost all the typescript for me.

So I’ve used Claude at both ends of the spectrum. I had way more fun in every situation.

AI is, fortunately, very bad at the things I find fun, at least for now, and very good at the things I find booooring (read in Scott Pilgrim voice).

framapotari 3 days ago | parent | prev | next [-]

I find it interesting how you take your experience and generalize it by saying "you" instead of "I". This is how I read your post:

> I don't know but to me this all sounds like the antithesis of what makes programming fun. I don't have productivity goals for hobby coding where I'd have to make the most of my half an hour -- that sounds too much like paid work to be fun. If I have a half an hour, I tinker for a half an hour and enjoy it. Then I continue when I have another half an hour again. (Or push into night because I can't make myself stop.)

Reading it like this makes it obvious to me that what you find fun is not necessarily what other people find fun. Which shouldn't come as a surprise. Describing your experience and preferences as something more is where the water starts getting muddy.

satvikpendem 3 days ago | parent | prev | next [-]

> There are two sorts of projects (or in general, people): artisans, and entrepreneurs. The latter see code as a means to an end, possibly monetized, and the former see code as the end in itself.

Me from 9 days ago: https://news.ycombinator.com/item?id=46391392#46398917

krisgenre 3 days ago | parent | prev | next [-]

I have nearly two decades of programming experience, mostly server-side. The other day I wanted a quick desktop (Linux) program to chat with an LLM. I found out about the Vicinae launcher, then sketched out an extension in React (which I had never used) to chat with an LLM using an OpenAI-compatible API. Antigravity wrote a bare-minimum working extension in a single prompt. I didn't even need to research how to write an extension for an app released only three to five months ago. I then used AI assistance to add more features and polish the UI.

This was a fun weekend but I would have procrastinated forever without a coding agent.

css_apologist 2 days ago | parent | prev | next [-]

LLMs are really showing how different programmers are from one another

i am in your camp, i get 0 satisfaction out of seeing something appear on the screen which i don't deeply understand

i want to feel the computer as i type, i've recently been toying with turning off syntax highlighting and LSPs (not for everyone), and i am surprised at the lack of distractions and feeling of craft and joy it brings me

chrysoprace 3 days ago | parent | prev | next [-]

I think it just depends on the person or the type of project. If I'm learning something or building a hobby project, I'll usually just use an autocomplete agent and leave Claude Code at work. On the other hand, if I want to build something that I actually need, I may lean on AI assistants more because I'm more interested in the end product. There are certain tasks as well that I just don't need to do by hand, like typing an existing SQL schema into an ORM's DSL.

ryang2718 3 days ago | parent | prev | next [-]

I too have found this. However, I absolutely love being able to mock up a larger idea in 30 minutes to assess feasibility as a proof of concept before I sink a few hours into it.

popalchemist 3 days ago | parent | prev | next [-]

Some people build because they enjoy the mechanics. Others build because they want to use the end product. That camp will get from A to B much more easily with AI, because for them it was never about the craft. And that's more than OK.

srcreigh 3 days ago | parent | prev | next [-]

Historically, tinkerers had to stay within an extremely limited scope of what they know well enough to enjoy working on.

AI changes that. If someone wants to code in a new area, it's 10000000x easier to get started.

What if the # of handwritten lines of code is actually increasing with AI usage?

bdcravens 3 days ago | parent | prev | next [-]

The problem with modern web development is that if you're not already doing it everyday, climbing the tree of dependencies just to get to the point where you have something show up on screen can be exhausting, and can take several of those half hour sessions.

Nevermark 2 days ago | parent | prev | next [-]

Is the manual coding part of programming still fun or not? We have a lot of opinions on either side here.

I think the classic division of problems being solved might, for most people, solve this seeming contradiction.

For every problem, X% of the work is solving the necessary complexity of the problem: taming the original problem, in relation to what computers are capable of doing, with the potential of some relevant, well-implemented libraries or APIs helping to close that gap.

Work in that scenario rarely feels like wasted time.

But in reality, there is almost always another problem we have to solve: the Y% = (100 - X)% of the work required for an actual solution, which involves wrangling with mismatches between the available tools and the problem being solved.

This can be relatively benign, just introducing some extra cute little puzzles that make our brains feel smart as we successfully win whack-a-mole. A side game that can even be refreshing.

Or the stack of tools, and their quirks, that we need to use can be an unbounded (even compounding) generative system of pervasive mismatches and pernicious, non-obvious, not immediately recognizable trenches, across which we must build a thousand little bridges, and maybe a few historic ones, just to create a path back to the original problem. And it is often evident that all this work is an artifact of a thousand less-than-perfect choices by others. (No judgement; tool creation has its own difficulties.)

That stuff can become energy-draining, to say the least.

I think high-X problems are fun to solve. Most of our work goes into solving the original problem. Even finding out it was more complex than we thought feels like meaningful drama and increases the joy of resolving it.

High-Y problems involve vast amounts of glue code and library wrappers with exception handling; the list in any code base can be significant and can even overwhelm the actual problem-solving code. And all those mismatches often hold us back, to where our final solution inevitably has problems in situations we hope never happen, until we can come back for round N+1, for unbounded N.

Any help from AI for the latter is a huge win. Those are not “real” problems. As tool stacks change, nobody will port Y-type solutions forward. (I tell myself, so I can sleep at night.)

So that’s it. We are all different. But the acceleration AI gives us on type-Y problems is most likely to feel great. Enabling. Letting us work harder on the things that are more important and lasting. On type-X problems AI is less of a boost, but still a potentially welcome one, as an assistant.

christina97 3 days ago | parent | prev | next [-]

I derive the majority of my hobby satisfaction from getting stuff done, not enjoying the process of crafting software. We probably enjoy quite different aspects of tinkering! LLMs make me have so much more fun.

ranger_danger 3 days ago | parent | prev | next [-]

I think there can be other equally valid perspectives than your own.

Some people have goals of actually finishing a project instead of just "tinkering"... and that's ok. Some say it might even be necessary.

themafia 3 days ago | parent | prev | next [-]

On top of that there's a not insignificant chance you've actually just stolen the code through an automated copyright whitewashing system. That these people believe they're adding value while never once checking if the above is true really disappoints me with the current direction of technology.

LLMs don't make everyone better, they make everything a copy.

The upwards transfer of wealth will continue.

dukeyukey 3 days ago | parent | prev | next [-]

Which is fine, because those things are what makes programming fun for you. Not for others.

schwartzworld 3 days ago | parent | prev | next [-]

What about the boring parts of fun hobby projects?

fartfeatures 3 days ago | parent | prev [-]

You could make the same argument about the printing press. Some people like forming the letters by hand, others enjoy actually writing.

alwillis 3 days ago | parent | next [-]

Actually, the invention of the printing press in 1450 created disruption, economic panic, and institutional fear similar to what we're experiencing now:

For centuries, the production of books was the exclusive domain of professional scribes and monks. To them, the printing press was an existential threat.

Job Displacement: Scribes in Paris and other major cities reportedly went on strike or petitioned for bans, fearing they would be driven into poverty.

The "Purity" Argument: Some critics argued that hand-copying was a spiritual act that instilled discipline, whereas the press was "mechanical" and "soulless."

Aesthetic Elitism: Wealthy bibliophiles initially looked down on printed books as "cheap" or "ugly" compared to hand-illuminated manuscripts. Some collectors even refused to allow printed books in their libraries to maintain their prestige.

Sound familiar?

From "How the Printing Press Reshaped Associations" -- https://smsonline.net.au/blog/how-the-printing-press-reshape... and

"How the Printing Press Changed the World" -- https://www.koolchangeprinting.com/post/how-the-printing-pre...

stryan 3 days ago | parent | next [-]

I've seen this argument a few times before and I'm never quite convinced by it because, well, all those arguments are correct. It was an existential threat to the scribes and destroyed their jobs, the majority of printed books are considered less aesthetically pleasing than a properly illuminated manuscript, and hand copying is considered a spiritual act by many traditions.

I'm not sure I'd say it's a correct argument, but considering everyone in this thread is a lot closer to being a scribe than a printing press owner, I'm surprised there isn't more sympathy.

gamewithnoname 3 days ago | parent | next [-]

Exactly.

What makes it even more odd for me is they are mostly describing doing nothing when using their agents. I see the "providing important context, setting guardrails, orchestration" bits appended, and it seems like the most shallow, narrowest moat one can imagine. Why do people believe this part is any less tractable for future LLMs? Is it because they spent years gaining that experience? Some imagined fuzziness or other hand-waving while muttering something about the nature of "problem spaces"? That is the case for everything the LLMs are toppling at the moment. What is to say some new pre-training magic, post-training trick, or ingenious harness won't come along and drive some precious block of your engineering identity into obsolescence? The bits about 'the future is the product' are even stranger (the present is already the product?).

To paraphrase theophite on Bluesky, people seem to believe that if there is a well free for all to draw from, there will still exist a substantial market willing to pay them to draw from this well.

fartfeatures 3 days ago | parent [-]

Having AI working with and for me is hugely exciting. My creativity is not something an AI can outmode. It will augment it. Right now ideas are cheap, implementation is expensive. Soon, ideas will be more valuable and implementation will be cheap. The economy is not zero sum nor is creativity.

gamewithnoname 2 days ago | parent [-]

[dead]

alwillis 3 days ago | parent | prev | next [-]

The point being missed is the printing press led to tens of millions of jobs and billions of dollars in revenue.

So far, new technologies that people were initially afraid of have ended up creating whole new sets of jobs and industries.

ako 3 days ago | parent | prev [-]

But the world is better off with the scribes unemployed: ideas get to spread, and more people can educate themselves through printed books.

Maybe the world is better off with fewer coders, as more software ideas can materialize into working software faster?

jimbokun 3 days ago | parent | prev [-]

Well, the lesson is that for all of us who invested a lot of time and effort to become good software developers, the value of our skill set is now near zero.

fartfeatures 3 days ago | parent [-]

Many of the same skills that we honed by investing that time and effort into being good software developers make us good AI prompters, we simply moved another layer of abstraction up the stack.

vehemenz 3 days ago | parent | prev | next [-]

This does seem to be what many are arguing, even if the analogy is far from perfect.

anhner 3 days ago | parent | prev [-]

Exactly! ...If the printing press spouted gibberish every 9 words.

simonw 3 days ago | parent [-]

That was LLMs in 2023.

fragmede 3 days ago | parent [-]

Respect to you. I ran out of energy to correct people's dated misconceptions. If they want to get left behind, it's not my problem.

munksbeer 3 days ago | parent [-]

At some point no-one is going to have to argue about this. I'm guessing a bit here, but my guess is that within 5 years, in 90%+ of jobs, if you're not using an AI assistant to code, you're going to be losing out on work. At that point, the argument over whether they're crap or not is done.

I say this as someone who has been extremely sceptical over their ability to code in deep, complicated scenarios, but lately, Claude Opus is surprising me. And it will just get better.

int_19h 3 days ago | parent [-]

> At that point, the argument over whether they're crap or not is done.

Not really, it just transforms into a question of how many of those jobs are meaningful anyway, or more precisely, how much output from them is meaningful.

munksbeer 2 days ago | parent [-]

I don't agree. I've recently started using Claude for more than dabbling and I'm getting good use out of it.

Not every task will be suitable at the moment, but many are. Give Claude lots of direction (I've been creating instructions.txt files) and iterate on those. Ask Claude to generate a plan and write it out to a file. Read the file, correct what needs correcting, then get it to implement. It works pretty well; you'll probably be surprised. I'm still doing a lot of thought work, but Claude is writing a lot of the actual code.

yomismoaqui 3 days ago | parent | prev | next [-]

It's a little shameful, but I still struggle when centering divs on a page. Yes, I've known about flexbox for more than a decade, but I always have to search to remember how it is done.

So instead of refreshing that less used knowledge I just ask the AI to do it for me. The implications of this vs searching MDN Docs is another conversation to have.

jfengel 3 days ago | parent | next [-]

No shame in that. I keep struggling to figure out the point of view of the CSS designers.

They don't think like graphic designers, or like programmers. It's not easy for beginners. It's not aimed at ease of implementation. It's not amenable to automated validation. It's not meant to be generated.

If there is some person for whom CSS layout comes naturally, I have not met them. As far as I can tell their design goal was to confuse everyone, at which they succeeded magnificently.

alwillis 3 days ago | parent [-]

> I keep struggling to figure out the point of view of the CSS designers.

Before 2017, the web had no real page layout system.

Think about it. Before the advent of Flexbox and CSS Grid, certain layouts were impossible to do. All we had were floats, absolute positioning, negative margin hacks, and using the table element for layout.

> They don't think like graphic designers or like programmers. It's not easy for beginners.

CSS is dramatically easier if you write it in order of specificity: styles that affect large parts of the DOM go at the top; more specific styles come later. Known as Inverted Triangle CSS (ITCSS), it has been around for a long time [1].

> It's not aimed at ease of implementation. It's not amenable to automated validation.

If you mean linting or adhering to coding guidelines, there are several; Stylelint is popular [2]. Any editor that supports Language Server Protocol (LSP), like VS Code and Neovim (among others), can use CSS and CSS Variables LSPs [3], [4] for code completion, diagnostics, formatting, etc.

> It's not meant to be generated.

Says who? There have been CSS generators and preprocessors since 2006, not to mention all the tools which turn mockups into CSS. LLMs have no problem generating CSS.

Lots of developers need to relearn CSS; the book Every Layout is a good start [5].

[1]: https://css-tricks.com/dont-fight-the-cascade-control-it/

[2]: https://stylelint.io

[3]: https://github.com/microsoft/vscode-css-languageservice

[4]: https://github.com/vunguyentuan/vscode-css-variables

[5]: https://every-layout.dev

naasking 2 days ago | parent [-]

Developers can learn a new programming language in a few weeks to months of just using it. If they can't learn to reliably and predictably use CSS in the same way, then I'd say that makes CSS flawed.

alwillis 2 days ago | parent [-]

> If they can't learn to reliably and predictably use CSS in the same way, then I'd say that makes CSS flawed.

It's not the fault of CSS that most developers don't learn to use it correctly. That's like blaming the bicycle when learning to ride one.

Frankly, it's not a priority for most of them to learn CSS; they don't see it as a "real" programming language; therefore it's not worth their time.

naasking 2 days ago | parent [-]

> It's not the fault of CSS that most developers don't learn to use it correctly. That's like blaming the bicycle when learning to ride one.

It's not like blaming the bicycle, that's the whole point of my analogy to programming languages. Like I said, learning a new programming language in a few weeks of regular use is a common experience. This also happens with bikes, because you can try a few things, lose balance, make a few intuitive adjustments, and iterate easily.

This just doesn't work with CSS. There are so many pitfalls and corner cases, and reasoning about it is non-compositional and highly contextual. That's the complete opposite of learning to ride a bike or learning a new programming language.

You literally do need to read, like, a formal specification of CSS to really understand it, and even then you'll regularly get tripped up. People just learn to stick to a small subset of CSS for which they've managed to build a predictable model, which is why we got toolkits like Bootstrap.

Edit: this also explains why things like Tailwind are popular: it adds a certain amount of predictability and composition to CSS. Using CSS was way worse in the past when browser compatibility was worse, but it's still not a great experience.

simonw 3 days ago | parent | prev | next [-]

Hah, centering divs with flexbox is one of my uses for this too! I can never remember the syntax off the top of my head, but if I say "center it with flexbox" it spits out exactly the right code every time.

If I do this a few more times it might even stick in my head.
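
For reference, the usual incantation is a few lines of CSS on the containing element (the .parent selector below is just a placeholder, not anything specific to my projects):

    .parent {
      display: flex;            /* lay the children out as flex items */
      justify-content: center;  /* center along the main (horizontal) axis */
      align-items: center;      /* center along the cross (vertical) axis */
    }

(The even shorter grid variant is display: grid; place-items: center; on the container.)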

robofanatic 3 days ago | parent | prev | next [-]

> Yes, I know about flexbox for more than a decade but always have to search to remember how it is done.

These days I use display: flex; so much that I wish the initial value of the display property in CSS were flex instead of inline.

barrkel 3 days ago | parent | prev | next [-]

Try tailwind. Very amenable to LLM generation since it's effectively a micro language, and being colocated with the document elements, it doesn't need a big context to zip together.

llmslave2 3 days ago | parent | prev [-]

Surely searching "centre a div" takes less time than prompting and waiting for a response...

duggan 3 days ago | parent | next [-]

Search “centre a div” in Google

Wade through ads

Skim a treatise on the history of centering content

Skim over the “this question is off topic / duplicate” noise on Stack Overflow

Find some code on the page

Try to map how that code will work in the context of your other layout

Realize it’s plain CSS and you’re looking for Tailwind

Keep searching

Try some stuff until it works

Or…

Ask LLM. Wait 20-30 seconds. Move on to the next thing.

duskdozer 12 hours ago | parent | next [-]

Half the reason search engines are so miserable to use anymore is that they've been laden down with so much low quality LLM-generated content.

SchemaLoad 3 days ago | parent | prev | next [-]

The middle step is asking an LLM how it's done and making the change yourself. You skip the web junk and learn how it's done for next time.

duggan 3 days ago | parent [-]

Yep, that’s not a bad approach, either.

I did that a lot initially, it’s really only with the advent of Claude Code integrated with VS Code that I’m learning more like I would learn from a code review.

It also depends on the project. Work code gets a lot more scrutiny than side projects, for example.

Izkata 3 days ago | parent | prev | next [-]

> Search “centre a div” in Google

Aaand done. Very first result was a blog post showing all the different ways to do it, old and new, without any preamble.

stephenr 3 days ago | parent | prev | next [-]

Or, given that OP is presumably a developer who just doesn't focus fully on front end code they could skip straight to checking MDN for "center div" and get a How To article (https://developer.mozilla.org/en-US/docs/Web/CSS/How_to/Layo...) as the first result without relying on spicy autocomplete.

Given how often people acknowledge that AI slop needs to be verified, it seems like a shitty way to achieve something like this vs just checking it yourself with well-known, good reference material.

duggan 3 days ago | parent [-]

LLMs work very well for a variety of software tasks — we have lots of experience around the industry now.

If you haven’t been convinced by pure argument in 2026 then you probably won’t be. But the great thing is you don’t have to take anyone’s word for it.

This isn’t crypto, where everyone using it has a stake in its success. You can just try it, or not.

stephenr 3 days ago | parent [-]

That's a lot of words to say "trust me bruh" which is kind of poetic given that's the entire model (no pun intended) that LLMs work on.

duggan 3 days ago | parent [-]

Hardly. Just pointing out that water is wet, from my perspective.

But there is an interesting looking-glass effect at play, where the truth seems obvious and opposite on either side.

bitwize 3 days ago | parent | prev [-]

Wait till the VC tap gets shut off.

You: Hey ChatGPT, help me center a div.

ChatGPT: Certainly, I'd be glad to help! But first you must drink a verification can to proceed.

Or:

ChatGPT: I'm sorry, you appear to be asking a development-related question, which your current plan does not support. Would you like me to enable "Dev Mode" for an additional $200/month? Drink a verification can to accept charges.

lenkite 3 days ago | parent | next [-]

Seriously, they have got their HOOKS into these Vibe Coders and AI Artists who will pony up $1000/month for their fix.

bonesss 3 days ago | parent [-]

A little hypothesis: a lot of .NET and Java stuff is mainlined from a giant megacorp straight to developers through a curated certification, MVP, blogging, and conference circuit apparatus designed to create unquestioned, corporate-friendly, highly profitable dogma. You say ‘website’ and from the letter ‘b’ they’re having a Pavlovian response (“Azure hosted SharePoint, data lake, MSSQL, user directory, analytics, PowerBI, and…”).

Microsoft’s dedication to infusing OpenAI tech into everything seems like a play to cut even those tepid brains out of the loop and capture the vehicles of planning and production. Training your workforce to be dependent on third-party thinking, planning, and advice is an interesting strategy.

llmslave2 3 days ago | parent | prev | next [-]

Calling it now: AI withdrawal will become a documented disorder.

duskdozer 12 hours ago | parent | next [-]

https://en.wikipedia.org/wiki/Chatbot_psychosis

LinXitoW 3 days ago | parent | prev | next [-]

We already had that happen. When GPT 5 was released, it was much less sycophantic. All the sad people with AI girl/boyfriends threw a giant fit because OpenAI "murdered" the "soul" of their "partner". That's why 4o is still available as a legacy model.

freedomben 3 days ago | parent | prev [-]

I can absolutely see that happening. It's already kind of happened to me a couple of times when I found myself offline and was still trying to work on my local app. Like any addiction, I expect it to cost me some money in the future

duskdozer 12 hours ago | parent | prev | next [-]

Definitely. Right now I can access and use them for free without significant annoyance. I'm a canary for enshittification; I'm curious what it's going to look like.

jckahn 3 days ago | parent | prev | next [-]

Alternatively, just use a local model with zero restrictions.

alwillis 3 days ago | parent | next [-]

The next best thing is to use the leading open source/open weights models for free or for pennies on OpenRouter [1] or Huggingface [2].

An article about the best open weight models, including Qwen and Kimi K2 [3].

[1]: https://openrouter.ai/models

[2]: https://huggingface.co

[3]: https://simonwillison.net/2025/Jul/30/

baq 3 days ago | parent | prev | next [-]

This is currently negative expected value over the lifetime of any hardware you can buy today at a reasonable price, which is basically a monster Mac - or several - until Apple folds and raises prices due to RAM shortages.

master_crab 3 days ago | parent | prev [-]

This requires hardware in the tens of thousands of dollars (if we want the tokens spit out at a reasonable pace).

Maybe in 3-5 years this will work on consumer hardware at speed, but not in the immediate term.

vntok 3 days ago | parent [-]

$2000 will get you 30~50 tokens/s on perfectly usable quantization levels (Q4-Q5), taken from any one among the top 5 best open weights MoE models. That's not half bad and will only get better!

master_crab 3 days ago | parent | next [-]

That's if you are running lightweight models like DeepSeek 32B; anything more and it’ll drop. Also, costs have risen a lot in the last month for RAM and AI-adjacent hardware. It’s definitely not $2k for the rig needed for 50 tokens a second.

threeducks 3 days ago | parent | prev | next [-]

Could you explain how? I can't seem to figure it out.

DeepSeek-V3.2-Exp has 37B active parameters, GLM-4.7 and Kimi K2 have 32B active parameters.

Let's say we are dealing with Q4_K_S quantization for roughly half the size; we still need to move 16 GB 30 times per second, which requires a memory bandwidth of 480 GB/s, or maybe half that if speculative decoding works really well.

Anything GPU-based won't work at that speed, because PCIe 5 provides only 64 GB/s and $2000 cannot buy enough VRAM (~256 GB) for a full model.

That leaves CPU-based systems with high memory bandwidth. DDR5 would work (somewhere around 300 GB/s with 8x 4800MHz modules), but that would cost about twice as much for just the RAM alone, disregarding the rest of the system.

Can you get enough memory bandwidth out of DDR4 somehow?

int_19h 3 days ago | parent | prev [-]

That doesn't sound realistic to me. What is your breakdown on the hardware and the "top 5 best models" for this calculation?

vntok 4 hours ago | parent [-]

Look up AMD's Strix Halo mini-PC such as GMKtec's EVO-X2. I got the one with 128GB of unified RAM (~100GB VRAM) last year for 1900€ excl. VAT; it runs like a beast especially for SOTA/near-SOTA MoE models.

fragmede 3 days ago | parent | prev | next [-]

Just you wait until the powers that be take cars away from us! What absolute FOOLS you all are to shape your lives around something that could be taken away from us at any time! How are you going to get to work when gas stations magically disappear off the face of the planet? I ride a horse to work, and y'all are idiots for developing a dependency on cars. Next thing you're gonna tell me is we're going to go to war for oil to protect your way of life.

Come on!

stephenr 3 days ago | parent | next [-]

The reliance on SaaS LLMs is more akin to comparing owning a horse vs using a car on a monthly subscription plan.

prathamtharwani 2 days ago | parent | prev | next [-]

This is a poor analogy. Cars (mostly) don't require a subscription.

llmslave2 3 days ago | parent | prev | next [-]

Can't believe this car bubble has lasted so long. It's gonna pop any decade now!

LinXitoW 3 days ago | parent | prev [-]

I mean, they're taking away parts of cars at the moment. You gotta pay monthly to unlock features your car already has.

stephenr 3 days ago | parent [-]

Just like the comment you replied to, this is an argument against subscription-based "thing as a service" business models, not against cars.

duggan 3 days ago | parent | prev [-]

I mean sure, that could happen. Either it's worth $200/month to you, or you get back to writing code by hand.

freedomben 3 days ago | parent | prev | next [-]

If only it were that easy. I got really good at centering and aligning stuff, but only when the application is constructed in the way I expect. This is usually not a problem as I'm usually working on something I built myself, but if I need to make a tweak to something I didn't build, I frequently find myself frustrated and irritated, especially when there is some higher or lower level that is overriding the setting I just added.

As a bonus, I pay attention to what the AI did and its results, and I have actually learned quite a bit about how to do this myself even without AI assistance

3 days ago | parent | prev [-]
[deleted]
po84 3 days ago | parent | prev | next [-]

This matches my experience. A recent anecdote:

I took time during a holiday to write an Obsidian plugin 4 years ago to scratch a personal itch as it were. I promptly forgot most of the detail, the Obsidian plugin API and ecosystem have naturally changed since then, and Typescript isn't in my day-to-day lingo.

I've been collecting ideas for new plugins since then while dreading the investment needed to get back up to speed on how to implement them.

I took a couple hours over a recent winter holiday with Claude and cranked out two new plugins plus improvements to the 4 year old bit-rotting original. Claude handled much of the accidental complexity of ramping up that would have bogged me down in the past--suggesting appropriate API methods to use, writing idiomatic TS, addressing linter findings, ...

codebolt 3 days ago | parent | next [-]

Another anecdote: I built my first Android app in less than a dozen hours over the holiday, tailored for a specific need I have. I do have many years of experience with Java, C# and JS (Angular), but have never coded anything for mobile. Gemini helped me figure out how to set up a Kotlin app with a reasonable architecture (Hilt for dependency injection, etc). It also helped me find Material3 components and set up the UI in a way that looks not too bad, especially considering my lack of design skills. The whole project was a real joy to do, and I have a couple of more ideas that I'm going to implement over the coming months.

As a father of three with a busy life, this would've simply been impossible a couple of years ago.

simonw 3 days ago | parent | prev [-]

I'm finding that too. I have old stale projects that I'm hesitant to try and fix because I know it will involve hours of frustrating work figuring out how to upgrade core dependencies.

Now I can genuinely point Claude Code at them and say "upgrade this to the latest versions" and it will do most of that tedious work for me.

I can even have it fill in some missing tests and gaps in the documentation at the same time.

timenotwasted 3 days ago | parent | prev | next [-]

You just described my experience exactly. Especially the personal side project time as a parent. Now after bed I can tinker and have fun again because I can move so much more quickly and see real progress even with only an hour or two to spend every few days.

elliotbnvl 3 days ago | parent | next [-]

Yes! I feel like so many people really fail to appreciate this side of things.

Heck, Suno has gotten me to the point where I play so much more piano (the recording -> polished track loop is very rewarding) that not only did I publish an album to Spotify in my favorite genre, with music that I’m really happy with, but I’ve also started to produce some polished acoustic recordings with NO AI involvement. That’s just because I’ve been spending so much more time at the piano, because of that reward loop.

freedomben 3 days ago | parent | next [-]

As someone who is very much in this boat, though with guitar and bass rather than piano, I have really been wanting to get into this. I'm even willing to spend some money on tokens or subscription, but I have no idea how to really get started with it.

Are you willing to go into some more detail about what you do with Suno and how you use it?

elliotbnvl 3 days ago | parent [-]

I use it very simply. I pay for the monthly subscription that gives you 2k credits a month. I record a few song ideas every day, usually 2-3min recordings, using my phone and Apple Voice Memos. I export them as mp3 files and upload those to the Suno app with a very short prompt (my album is made of songs generated via the very simple but slightly weird “house string quartet” prompt that I discovered by accident).

I generate a bunch, pick the ones that sound good, extend them if necessary, and save. Eventually once I have 30ish I can just pick the top winners and assemble an album. It’s drop dead simple.

The only reason I published them is because my family started to get worried that the songs would get “lost,” and at the request of friends also. Not doing it for profit or anything.

The recording is the real prompt: the longer the recording you create, the more Suno adheres to the structure and tone/rhythm/voicings you choose.

I use the v5 model. Way better than the v4/4.5 models.

dharmatech 3 days ago | parent | prev [-]

What should we search for to hear your album?

elliotbnvl 3 days ago | parent [-]

Thanks for your interest!

My artist name is He & The Machines (yes, it’s a bit on the nose). It’s on Spotify, iTunes, YouTube, and anywhere else you look probably.

The album name is “songs to play at the end of the world”.

vorticalbox 3 days ago | parent | prev [-]

I’ve noticed this too at work.

If I keep the changes focused, I can iterate far faster on ideas because it can type faster than I can.

3 days ago | parent [-]
[deleted]
willtemperley 3 days ago | parent | prev | next [-]

> You don't need to carve out 2-4 hours to ramp up any more.

Yes. That used to require difficult decision making: “Can I do this and how long will it take?” was a significant cognitive load and source of stress. This was especially true when it became clear something was going to take days not hours, having expended a lot of effort already.

Even more frustrating was having to implement hacks due to time constraints when I knew a couple more hours would obviate that need.

Now I know within a couple of minutes if something is feasible or not and decision fatigue is much lower.

mands 3 days ago | parent | prev | next [-]

Yep, I have seen this myself, previously as a manager and now with a young family.

I can make incredible progress on side-projects that I never would have started with only 2-4 hours carved out over the course of a week.

There is hopefully a Jevons paradox here: we will have a bloom of side projects, with "what-if" / "if only I had the time" type projects coming to fruition.

MattSayar 3 days ago | parent [-]

This is exactly the case. Businesses in the past wouldn't automate some process because they couldn't afford to develop it. Now they can! Which frees up resources to tackle something else on the backlog. It's pretty exciting.

phamilton 3 days ago | parent | prev | next [-]

It all comes back to "Do more because of AI" rather than "Do less because of AI".

Getting back into coding is doing more. Updating an old project to the latest libraries is doing more.

It often feels ambiguous. Shipping a buggy, vibe-coded MVP might be doing less. But getting customer feedback on day one from a real tangible product can allow you to build a richer and deeper experience through fast iteration.

Just make sure we're doing more, not less, and AI is a wonderful step forward.

101008 3 days ago | parent | prev | next [-]

I was very anti-AI (mainly because I am scared that it'll take my job). I did a side project in just two days that would have taken me weeks. I deployed it. It's there, waiting for customers now.

I fell in love with the process, to be honest. I complained to my wife yesterday: "my only problem now is that I don't have enough time and money to pay for all the servers", because it opened up opportunities for me to develop and deploy a lot of new ideas.

agumonkey 3 days ago | parent | next [-]

Aren't you afraid it's gonna be a race to the bottom? The software industry is now whoever pays Gemini to deploy something prompted in a few days. Everybody can, so the market will be inundated by a lot of people, and usually this makes for a bad market (a few shiny ones get 90% of the share while the rest fight for breadcrumbs).

I'm personally more afraid that stupid sales-oriented types will take my job, rather than losing it to solid teams of dedicated experts who invested a lot of skill in making something on their own. It seems like value inversion.

solumunus 3 days ago | parent | next [-]

Anything that can be done in 2 days now with an LLM was low hanging fruit to begin with.

fullstackchris 3 days ago | parent | next [-]

I'll also argue that level of skill depends on what one can make in those two days... it's like a mirror. If you don't know what to ask for, it doesn't know what to produce

agumonkey 3 days ago | parent | prev [-]

I really wonder what long term software engineering projects will become.

baq 3 days ago | parent | next [-]

‘Why were they long term?’ is what you need to ask. Code has become essentially free in relative terms, both in time and money domains. What stands out now is validation - LLMs aren’t oracles for better or worse, complex code still needs to be tested and this takes time and money, too. In projects where validation was a significant percentage of effort (which is every project developed by more than two teams) the speed up from LLM usage will be much less pronounced… until they figure out validation, too; and they just might with formal methods.

agumonkey 3 days ago | parent [-]

Some long-term projects were due to the tons of details in source code, but some were due to inherent complexity and figuring out how to model something that works, no matter what the files' content turns out to be.

beginnings 3 days ago | parent | prev [-]

anything nontrivial is still long term, nothing has changed

freedomben 3 days ago | parent | prev | next [-]

Yes, I worry about this quite a bit. Obviously nobody knows yet how it will shake out, but what I've been noticing so far is that brand recognition is becoming more important. This is obviously not a good thing for startup yokels like me, but it does provide an opportunity for quality and brand building.

The initial creation and generation is indeed much easier now, but testing, identifying, and fixing bugs is still very much a process that takes some investment and effort, even when AI assisted. There is also considerable room for differentiation among user flows and the way people interact with the app. AI is not good at this yet, so the prompter needs to be able to identify and direct these efforts.

I've also noticed in some of my projects, even ones shipped into production in a professional environment, there are lots of hard-to-fix and mostly annoying bugs that just aren't worth it, or that take so much research and debugging effort that we eventually gave up and accepted the downsides. If you give the AI enough guidance to know what to hunt for, it is getting pretty good at finding these things. Often the suggested fix is a terrible idea, but the AI will usually tell you enough about what is wrong that you can use your existing software engineering skills and experience to figure out a good path forward. At that point you can either fix it yourself, or prompt the AI to do it. My success rate doing this is still only at about 50%, but that's half the bugs that we used to live with that we no longer do, which in my opinion has been a huge positive development.

vagab0nd 3 days ago | parent | prev | next [-]

My prediction is that software will be so cheap that, very soon, economies of scale give way to maximum customization, which means everyone writes their own software. There will be no software market in the future.

agumonkey 3 days ago | parent [-]

Possibly, which means devs will have to pivot ... I don't know where, though, since it would mean most jobs are over and a new economy must be invented.

SchemaLoad 3 days ago | parent | prev [-]

I think everyone worries about this. No one knows how it's going to turn out, none of us have any control over it and there doesn't seem to be anything you can do to prepare ahead of time.

zerr 3 days ago | parent | prev | next [-]

As a customer, I don't want to pay for vibe-coded products, because the authors also don't have the time (and/or skills) to properly review, debug, and fix those products.

naasking 2 days ago | parent [-]

They do with AI, that's the point.

3 days ago | parent | prev | next [-]
[deleted]
lelanthran 3 days ago | parent | prev [-]

> I felt in love with the process to be honest. I complained my wife yesterday: "my only problem now is that I don't have enough time and money to pay all the servers", because it opened to me the opportunities to develop and deploy a lot of new ideas.

What opportunities? You aren't going to make any money with anything you vibe coded because, even if the people you are targeting don't vibe code it themselves, the minute there's even a hint of you gaining traction someone else is going to vibe code it anyway.

And even if that didn't happen you're just reducing the signal/noise ratio; good luck getting your genuinely good product out there when the masses are spammed by vibe-coded alternatives.

When every individual can produce their own software, why do you think that the stuff produced by you is worth paying for?

wcarss 3 days ago | parent | next [-]

That might be true, but it doesn't have to be immediately true. It's an arbitrage problem: seeing a gap, knowing you can apply this new tool to make a new entrant, making an offering at a price that works for you, and hoping others haven't found a cheaper way or won the market first. In other words, that's all business as usual. How does Glad sell plastic bags when there are thousands of other companies producing plastic bags, often for far, far less? Branding, contracts, quality, pricing -- just through running a business. No guarantee it's gonna work.

Vibe-coding something isn't a guarantee the thing is shit. It can be fine. It still takes time and effort, too, but because it can take a lot less time to get a "working product", maybe some unique insight the parent commenter had on a problem is what suddenly made it worth their time.

Will everyone else who has that insight and the vibe coding skills go right for that problem and compete? Maybe, but, also maybe not. If it's a money-maker, they likely will eventually, but that's just business. Maybe you get out of the business after a year, but for a little while it made you some money.

lelanthran 3 days ago | parent [-]

> That might be true, but it doesn't have to be immediately true. It's an arbitrage problem: seeing a gap, knowing you can apply this new tool to make a new entrant, making an offering at a price that works for you, and hoping others haven't found a cheaper way or won the market first. In other words, that's all business as usual.

I'm hearing what you are saying, but the "business as usual" way almost always requires some money or some time (which is the same thing). The ones that don't (the performing arts, for example) average below-minimum-wage pay!

IOW, when the cost of production is almost zero, the market adjusts very quickly to reflect that. What happens then is that a few lottery ticket winners make bank, and everyone else does it for free (or close to it).

You're essentially hoping to be one of those lottery ticket winners.

> How does Glad sell plastic bags when there are thousands of other companies producing plastic bags, often for far, far less?

The cost of production of plastic bags is not near zero, and the requirements for producing plastic bags (i.e. cloning the existing products) include substantial capital.

You're playing in a different market, where the cost of cloning your product is zero.

There's quite a large difference between operating in a market where there is a barrier (capital, time and skill) and operating in a market where there are no capital, time or skill barriers.

The market you are in is not the same as the ones you are comparing your product to. The better comparison is artists, where even though there is a skill and time barrier, the clear majority of the producers do it as a hobby, because it doesn't pay enough for them to do it as a job.

wcarss 2 days ago | parent [-]

All fair points; I think I agree with your take overall, but we might each be focusing on situations involving different levels of capital, time, and skill: I'm imagining situations where AI use brought the barrier down substantially for some entrants but it still meaningfully exists, while it sounds to me like you're considering the essentially zero-barrier case.

My Glad example was off the cuff, but it still feels apt to me for the case I mean: the barrier for an existing plastic-product producer who doesn't already make bags to start producing them is likely very low, but it's still non-zero, while the barrier for a random person is quite high. I feel vibe coding made individual projects much cheaper (sometimes zero) for decent programmers, but it hasn't made my mom start producing programming projects -- the barrier still seems quite high for non-technical people.

lelanthran 2 days ago | parent [-]

I dunno about the Glad bag analogy, and now I'm not sure that the artist analogy applies either.

I think a better analogy (i.e. one that we can both agree on) is Excel spreadsheets.

There are very few "Excel consultants" available that companies hire. You can't make money by providing solutions in Excel because anyone who needs something that can be done in Excel can just do it themselves.

It's like if your mum needed to sum income and expenditures for a little side-business: she won't be hiring an Excel consultant to write the formulas into the 4-6 cells that contain calculations (a =SUM() here, a subtraction there), she'll simply do it herself.

I think vibe coding is going to be the same way in a few years (much faster than spreadsheets took off, btw, which happened over a period of a decade) - someone who needs a little project management application isn't going to buy one, they can get one in an hour "for free"[1].

Just about anything you can vibe-code, an office worker with minimal training (the average person in 2026, for example) can vibe-code. The skill barrier to vibe-coding little apps like this is less than the skill barrier for creating Excel workbooks, and yet almost every office worker does it.

--------------------------------------------------------------

[1] In much the same way that someone considers creating a new spreadsheet to be free when they already have Excel installed, people are going to regard the output of LLMs as "free" because they are already paying the monthly fee for them.

mirsadm 3 days ago | parent | prev [-]

You're overestimating people's willingness to produce their own software, even when they don't have to write the code themselves. Most people just don't want to do it, even if AI makes it easy. Not sure who you're talking to, but most people I know who aren't programmers have zero interest in writing their own software, even if they could do it using prompts only.

wnevets 3 days ago | parent | prev | next [-]

> AI assistance means you can get something useful done in half an hour, or even while you are doing other stuff. You don't need to carve out 2-4 hours to ramp up any more.

That fits my experience with a Chrome extension I created. Instead of having to read the docs, find example projects, etc., I was able to get a working version in less than an hour.

wcarss 3 days ago | parent [-]

I experienced the exact same thing: I needed a web tool, and as far as I could tell from recent reviews, the offerings in the chrome extension store seemed either a little suspicious or broken, so I made my own extension in a little under an hour.

It used recent APIs and patterns that I didn't have to go read extensive docs for or do deep learning on. It has an acceptable test suite. The code was easy to read, and reasonable, and I know no one will ever flip it into ad-serving malware by surprise.

A big thing is just that the idea of creating a non-trivial tool is suddenly a valid answer to the question. Previously, I know I would have had to spend a bunch of time reading docs, finding examples, etc., to say nothing of the inevitable farting around with a minor side-quest because something wasn't working, or rethinking and reworking some design decision that on the whole wasn't that important. Instead, something popped into existence, mostly worked, and I could review and tweak it.

It's a little bit like jumping from a problem of "solve a polynomial" to one of "verify a solution for a polynomial".
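
To give a sense of scale (a toy sketch, not my actual extension): a Manifest V3 extension is essentially a manifest.json (name, version, "manifest_version": 3, and a content_scripts entry) plus the script itself, something like:

    // content.js - hypothetical example, just to show how little scaffolding is involved:
    // count links that point off the current site and log the total.
    const links = document.querySelectorAll("a[href^='http']");
    const external = [...links].filter(
      (a) => new URL(a.href).hostname !== location.hostname
    );
    console.log(`External links on this page: ${external.length}`);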

makeitdouble 3 days ago | parent | prev | next [-]

> lost their personal side project time

Yes !

> moved into management roles

Please stop - unless "coding" here just means making PoCs.

If it's actual code that runs important stuff in production, then either they care enough to understand all the ins and outs, and going into management didn't cut them off from coding; or they're only pushing what they see as "good enough" code while their team starts polishing resumes, in which case they'd probably produce more by doing management.

PS: if you only have half an hour for writing something, will you have 3 hours for rolling it back and dealing with the issues produced when stuff goes sideways? I really don't get the logic.

simonw 3 days ago | parent [-]

A common policy I've seen from engineering managers who code (and one I've stuck to myself when I've been in engineering management roles) is to avoid writing code that's on the critical path to shipping.

That means your team should never be blocked on code that you are responsible for, because as an engineering manager you can rarely commit dedicated coding time to unblocking them.

This still leaves space for quite a few categories of coding:

- prototypes and proof of concepts

- internal "nice to have" tools that increase developer quality of life (I ended up hacking on plenty of these)

- helping debug issues

coliveira 3 days ago | parent | prev | next [-]

The good thing about AI is that it knows all the hundreds of little libraries that keep popping up every few days like a never-ending stream. I no longer need to worry about learning this stuff; I can just ask the AI what libraries to use for something and it will bring up these dependencies and provide sample code to use them. I don't like AI for coding real algorithms, but I love the fact that I don't need to worry about the myriad of libraries that you had to keep up with until recently.

fullstackchris 3 days ago | parent [-]

what "AI" are you speaking of? all the current leading LLMs i know of will _not_ do this (i.e. web search for the latest libraries) unless you explicitly ask

pixelsort 3 days ago | parent [-]

I'll sometimes ask Claude Sonnet 4.5 for JS and TS library recommendations. Not for "latest" or "most popular". For this case, it seems to love recommending promising-looking code from repos released two months ago with like 63 stars.

atomicnumber3 3 days ago | parent | prev | next [-]

I don't like it. It lets "management" ignore their actual jobs - the ones that are nominally so valuable that they get paid more than most engineers, remember - and instead either splash around in the kiddie pool, or jump into the adult pool, nearly drown, and need an actual engineer to bail them out. (The kiddie pool is useless side projects, the adult pool is the prod codebase, and drowning is either getting lost in the weeds of "it compiles and I'm done! Now how do I merge, and how do I know I'm not going to break prod?" or just straight up causing an incident and apologizing profusely for ruining the oncall's evening, even though both of them know they're going to do it again in 2 weeks.)

I don't know how many more times I have to tell people, especially former engineers who SHOULD KNOW THIS (unless they were the fail-upwards pretender kind): the code is not the slow part! (Sorry, I'm not yelling at you, reader. I'm yelling at my CEO.)

jimbokun 3 days ago | parent | prev | next [-]

Now we ALL be project managers! Hooray!

duskdozer 12 hours ago | parent [-]

I think this is probably the disconnect between me and heavy LLM-users. I find the process of asking them to generate code to be overall much more frustrating than just writing code myself.

elliotbnvl 3 days ago | parent | prev | next [-]

Yes! I've seen this myself: folks moving back into development after years or decades away.

imiric 3 days ago | parent | next [-]

They're not moving back into development. They're adopting a new approach to producing software, which has nothing to do with the work that software developers do. It's likely that they "left" the field because they were more interested in other roles, which is fine.

So now that we have tools that promise to offload the work a software developer does, there are more people interested in simply producing software, and skipping all of that "busy work".

The idea that this is the same as software development is akin to thinking that assembling IKEA furniture makes you a carpenter.

simonw 3 days ago | parent | next [-]

That IKEA analogy is pretty good, because plenty of people use IKEA furniture to solve the "I need a bookshelf" problem - and often enjoy the process - without feeling like they should call themselves a carpenter.

I bet there are professional carpenters out there who occasionally assemble an IKEA bookshelf because they need something quick and don't want to spend hours building one themselves from scratch.

imiric 3 days ago | parent [-]

Definitely. I'm not disparaging the process of assembling IKEA furniture, nor the process of producing software using LLMs. I've done both, and they have their time and place.

What I'm pushing back on is the idea that these are equivalent to carpentry and programming. I think we need new terminology to describe this new process. "Vibe coding" is at the extreme end of it, and "LLM-assisted software development" is a mouthful.

Although, the IKEA analogy could be more accurate: the assembly instructions can be wrong; some screws may be missing; you ordered an office chair and got a dining chair; a desk may have five legs; etc. Also, the thing you built is made out of hollow MDF, and will collapse under moderate levels of stress. And if you don't have prior experience building furniture, you end up with no usable skills to modify the end result beyond the manufacturer's original specifications.

So, sure, the seemingly quick and easy process might be convenient when it works. Though I've found that it often requires more time and effort to produce what I want, and I end up with a lackluster product, and no learned skills to show for it. Thus learning the difficult process is a more rewarding long-term investment if you plan to continue building software or furniture in the future. :)

elliotbnvl 3 days ago | parent | prev | next [-]

A bit of a sweeping generalization there. There's a huge range of ways in which LLMs are being leveraged for software development.

Using a drill doesn’t make you any less of a carpenter, even if you stopped using a screwdriver because your wrists are shot.

bitwize 3 days ago | parent | prev [-]

It's called being a systems analyst or product manager. Upskill into these roles (while still accepting individual contributor pay) or get left behind.

pianopatrick 3 days ago | parent | next [-]

Do you see any reason why AI and software will not soon take over system analyst or product manager roles? If we can go from natural language prompt to working code, it seems like not too big of a step to set up a system that goes straight from user feedback to code changes.

imiric 3 days ago | parent | prev [-]

I'm sorry, "upskill"? The roles you mentioned don't require any more advanced skills than those required for software development—just a different set of skills.

And an IC is not "left behind" if those roles don't interest them. What a ridiculous thing to say. A systems analyst or product manager is not a natural progression for someone who enjoys software development.

beaker52 3 days ago | parent | prev [-]

Only it’s a bit like me getting back into cooking because I described the dish I want to a trainee cook.

simonw 3 days ago | parent | next [-]

Depends on how you're using the LLMs. It can also be like having someone else around to chop the onions, wash the pans and find the ingredients when you need them.

CuriouslyC 3 days ago | parent | prev | next [-]

The head chefs at most restaurants delegate the majority of details of dishes to their kitchen staff, then critique and refine.

peteforde 3 days ago | parent | prev | next [-]

This approach seems to have worked out for both Warhol and Chihuly.

elliotbnvl 3 days ago | parent | prev | next [-]

As long as you get the dish you want, when before you couldn't have it at all -- who cares?

beaker52 3 days ago | parent [-]

Sure, as long as you don’t expect me to digest it, live with it, and crap it out for you, I see no problem with it.

elliotbnvl 3 days ago | parent | next [-]

My expectations don’t change whether or not I’m using AI, and neither do my standards.

Whether or not you use my software is up to you.

peteforde 3 days ago | parent | prev [-]

So you're saying that if you go to any famous restaurant and the famous face of the restaurant isn't personally preparing your dinner with their hands and singular attention, you are disappointed.

Got it.

esafak 3 days ago | parent | prev | next [-]

Are you even cooking if you did not collect your own ingredients and forge your own tools??

maplethorpe 3 days ago | parent | prev | next [-]

Isn't that still considered cooking? If I describe the dish I want, and someone else makes it for me, I was still the catalyst for that dish. It would not have existed without me. So yes, I did cook it.

beaker52 3 days ago | parent | next [-]

Work harder!

Now I’m a life coach because I’m responsible for your promotion.

maplethorpe 3 days ago | parent | next [-]

Ok, maybe my analogy wasn't the best. But the point I was trying to make is that using AI tools to write code doesn't mean you didn't write the code.

hackable_sand 3 days ago | parent | prev [-]

Very apt analogy. I'm still waiting for my paycheck.

krapp 3 days ago | parent | prev | next [-]

> If I describe the dish I want, and someone else makes it for me, I was still the catalyst for that dish. It would not have existed without me. So yes, I did "cook" it.

The person who actually cooked it cooked it. Being the "catalyst" doesn't make you the creator, nor does it mean you get to claim that you did the work.

Otherwise you could say you "cooked a meal" every time you went to McDonald's.

elliotbnvl 3 days ago | parent [-]

Why is the head chef called the head chef, then? He doesn’t “cook”.

9rx 3 days ago | parent | next [-]

To differentiate him from the "cook", which is what we call those who carry out the actual act of cooking.

elliotbnvl 3 days ago | parent [-]

Well, don’t go around calling me a compiler!

9rx 3 days ago | parent [-]

If that's what you do, then the name is perfectly apt. Why shy away from what you are?

beaker52 3 days ago | parent | prev | next [-]

The difference is that the head chef can cook very well and could do a better job of the dish than the trainee.

krapp 3 days ago | parent | prev [-]

"head chef" is a managerial position but yes often they can and do cook.

mock-possum 3 days ago | parent | prev [-]

I would argue that you technically did not cook it yourself - you are however responsible for having cooked it. You directed the cooking.

9rx 3 days ago | parent | prev [-]

Flipping toggle switches went out of fashion many, many, many years ago. We've been describing to trainees (compilers) the dish we want for longer than most on HN have been alive.

beaker52 3 days ago | parent [-]

Actually, we’ve been formally declaring the logic of programs to compilers, which is something very different.

beaker52 3 days ago | parent | next [-]

(Replying to myself because hn)

That’s not the only difference at all. A good use of an LLM might be to ask it what the difference between using an LLM and writing code for a compiler is.

9rx 3 days ago | parent [-]

Equally a good use for a legacy compiler that compiles a legacy language. Granted, you are going to have to write a lot more boilerplate to see it function (that being the difference, after all), but the outcome will be the same either way. It's all just 1s and 0s at the end of the day.

beaker52 3 days ago | parent [-]

Sorry friend, if you can’t identify the important differences between a compiler and an LLM, either intentionally or unintentionally (I can’t tell), then I must question the value of whatever you have to say on the topic.

9rx 3 days ago | parent [-]

The important difference is the reduction in boilerplate, which allows programs to be written with (often) significantly less code. Hence the time savings (and fun) spoken of in the original article.

This isn't really a new phenomenon. Languages have been adding things like arrays and maps as builtins to reduce the boilerplate required around them. The modern languages of which we speak take that same idea to a whole new level, but such is the nature of evolution.
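
A rough illustration of what I mean (a toy sketch of my own): counting word frequencies with a built-in map is a few lines in any modern language, where you once had to build or pull in the container yourself.

    // Toy example: word counts using the language's built-in Map -
    // no hand-rolled hash table or third-party container needed.
    const words = "to be or not to be".split(" ");
    const counts = new Map();
    for (const w of words) {
      counts.set(w, (counts.get(w) ?? 0) + 1);
    }
    console.log(counts.get("to")); // 2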

beaker52 3 days ago | parent [-]

No, when we write code it has an absolute and specific meaning to the compiler. When we write words to an LLM they are written in a non-specific, informal language (usually English) and processed non-deterministically too. This is an incredibly important distinction that makes coding, and asking an LLM to code, two completely different ball games. One is formal, one is not.
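
To make that concrete (a toy contrast of my own, not anyone's real code):

    const prices = [20, 5, 12];
    // To a compiler, the next line has exactly one meaning, every time it runs:
    const total = prices.reduce((sum, p) => sum + p, 0); // always 37
    // To an LLM, "add up the prices, but skip any that look wrong" is an
    // informal request - two runs can reasonably interpret "look wrong"
    // in two different ways.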

And yes, this isn’t a new phenomenon.

9rx 2 days ago | parent [-]

It's different in some ways (such is evolution), but is not a distinction that matters. Kind of like the difference between imperative and declarative programming. Different language models, but all the same at the end of the day.

polyamid23 2 days ago | parent [-]

I hope you are joking.

9rx a day ago | parent [-]

The only other difference mentioned is in implementation, but concepts are not defined by implementation. Obviously you could build a C compiler with neural nets. That wouldn't somehow magically turn everything into something completely different just because someone used a 'novel' approach inside of the black box.

9rx 3 days ago | parent | prev [-]

The only difference is that newer languages have figured out how to remove a lot of the boilerplate.

3 days ago | parent [-]
[deleted]
neuropacabra 3 days ago | parent | prev | next [-]

Amen to that!

kachapopopow 3 days ago | parent | prev | next [-]

I was just getting pretty sick and tired of programming; now AI can write the code while I do the fun parts: figuring out how shit works, general device hacking, and home projects.

bgwalter 3 days ago | parent | prev | next [-]

[flagged]

dang 3 days ago | parent | next [-]

Could you please stop posting cynical and/or curmudgeonly comments to HN? You've been doing it repeatedly, and it destroys the intended spirit of this site.

If you'd please review https://news.ycombinator.com/newsguidelines.html and stick to the rules when posting here, we'd appreciate it.

bgwalter 3 days ago | parent [-]

How can I express in a non-cynical way that I think LLMs are theft? Even if courts decide in the future that they are not, it is still a protected opinion, in the same way that some people do not recognize the overturning of Roe v. Wade.

simonw 3 days ago | parent [-]

You could save those opinions for discussions about the legality and ethics of training LLMs on unlicensed data, which crop up here pretty often.

8697656846548 3 days ago | parent | prev [-]

[flagged]

hackable_sand 3 days ago | parent | prev | next [-]

What do LLMs have to do with returning to coding?

Just...

...write the code. Stop being lazy.

izacus 3 days ago | parent | prev [-]

Yes, the people who were at best average engineers, and those whose skills atrophied through lack of practice, seem to be the biggest AI fanboys on my social media.

It's telling, isn't it?