stavros 4 days ago

I've come to view LLMs as a consulting firm where, for each request, I have a 50% chance of getting either an expert or an intern writing my code, and there's no way to tell which.

Sometimes I accept this, and I vibe-code, when I don't care about the result. When I do care about the result, I have to read every line myself. Since reading code is harder than writing it, this takes longer, but LLMs have made me too lazy to write code now, so that's probably the only alternative that works.

I have to say, though, the best thing I've tried is Cursor's autocomplete, which writes 3-4 lines for you. That way, I can easily verify that the code does what I want, while still reaping the benefit of not having to look up all the APIs and function signatures.

kaptainscarlet 4 days ago | parent | next [-]

I've also had a similar experience. I have become too lazy since I started vibe-coding. I have transitioned from coder to code reviewer/fixer very quickly. Overall I feel like it's a good thing, because the last few years of my life have been a repetition of frontend components and API endpoints, which to me has become too monotonous, so I am happy to have AI take over that grunt work while I supervise.

latexr 3 days ago | parent | next [-]

> My coding has transitioned from coder to code reviewer/fixer vey quickly. Overall I feel like it's a good thing

Until you lose access to the LLM and find your ability has atrophied to the point you have to look up the simplest of keywords.

> the last few years of my life has been a repetition of frontend components and api endpoints, which to me has become too monotonous

It’s a surprise that so many people have this problem/complaint. Why don’t you use a snippet manager?! It’s lightweight, simple, fast, predictable, offline, and includes the best version of what you learned. We’ve had the technology for many, many years.

the_real_cher 3 days ago | parent | next [-]

> Until you lose access to the LLM and find your ability has atrophied to the point you have to look up the simplest of keywords.

I never remembered those keywords to begin with.

Checkmate!

onion2k 3 days ago | parent | prev | next [-]

> Until you lose access to the LLM and find your ability has atrophied to the point you have to look up the simplest of keywords.

Devs shouldn't be blindly accepting the output of an LLM. They should always be reviewing it, and only committing the code that they're happy to be accountable for. Consequently your coding and syntax knowledge can't really atrophy like that.

Algorithms and data structures on the other hand...

latexr 21 hours ago | parent [-]

> Devs shouldn't be blindly accepting the output of an LLM.

I agree, they shouldn’t. Yet they are. Not all, of course, but a large enough portion to be a problem. And it’s not just a problem for them, but everyone who has to use what they built.

TuringTest 2 days ago | parent | prev | next [-]

> Until you lose access to the LLM and find your ability has atrophied to the point you have to look up the simplest of keywords.

You can locally run pretty decent coding models such as Qwen3 Coder on an RTX 4090 GPU through LM Studio, or Ollama with Cline.

It's a good idea even if they give slightly worse results on average, as you can avoid spending expensive tokens on trivial grunt work and use them only for the really hard questions where Claude or ChatGPT 5 will excel.

latexr 21 hours ago | parent [-]

Or you could use your brain, which will actually learn and improve.

realharo 3 days ago | parent | prev [-]

> Until you lose access to the LLM and find your ability has atrophied to the point you have to look up the simplest of keywords.

Realistically, that's probably never going to happen. Expecting it is just like the prepper mindset.

DaSHacka 3 days ago | parent [-]

I imagine this is what everyone says about all SaaS services in their "burn investor money to acquire more users" phase, before they hit the "enshittify and charge more for the service to become profitable" phase.

stavros 4 days ago | parent | prev | next [-]

Yeah, exactly the same for me. It's tiring writing the same CRUD endpoints a thousand times, but that's how useful products are made.

foolserrandboy 4 days ago | parent [-]

I wonder why it’s not the norm to use code generation or some other form of meta programming to handle this boring repetitive work?

stavros 4 days ago | parent | next [-]

Because, just as a carpenter doesn't always make the same table but can still be tired of always making tables, I don't always write the exact same CRUD endpoints, but I am tired of always writing CRUD endpoints.

js8 3 days ago | parent [-]

I think your analogy shows why LLMs are useful, despite being kinda bad. We need some programming tool to which we can say, "like this CRUD endpoint, but different in this and that". Our other metaprogramming tools cannot do that, but LLMs kinda can.

I think that now we have identified this problem (programmers need more abstract metaprogramming tools) and a sort of practical engineering solution (training LLMs on code), it's time for researchers (in the nascent field of metaprogramming, aka applied logic) to recognize this and create some useful theories to help guide it.

In my opinion, it should lead to the adoption of richer (more modal and more fuzzy) logics in metaprogramming (beyond just the typed lambda calculus on which our current programming languages are based). That way, we will be able to express and handle uncertainty (e.g. have a model of what constitutes a CRUD endpoint in an application) in a controlled and consistent way.

This is similar to how programming is evolving from imperative with crude types into something more declarative with richer types. (Roughly, the types are the specification and the code is the solution.) With a good set of fuzzy type primitives, it would be possible to define a type of "CRUD endpoint", and then answer the question of whether a given program has that type.
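The last idea, asking whether a given program "has the type" of a CRUD endpoint, already has a crude structural analogue in today's languages: a runtime-checkable protocol. A minimal Python sketch (the names `CrudEndpoint` and `UserEndpoint` are hypothetical, and this only checks method shapes, not the fuzzy semantics the comment asks for):

```python
from typing import Any, Protocol, runtime_checkable

@runtime_checkable
class CrudEndpoint(Protocol):
    """A structural 'type' of CRUD endpoint: anything with these methods."""
    def create(self, data: dict) -> Any: ...
    def read(self, key: str) -> Any: ...
    def update(self, key: str, data: dict) -> Any: ...
    def delete(self, key: str) -> None: ...

class UserEndpoint:
    """An ordinary class that happens to satisfy the protocol."""
    def __init__(self) -> None:
        self._store: dict = {}

    def create(self, data: dict) -> dict:
        self._store[data["id"]] = data
        return data

    def read(self, key: str) -> dict:
        return self._store[key]

    def update(self, key: str, data: dict) -> dict:
        self._store[key].update(data)
        return self._store[key]

    def delete(self, key: str) -> None:
        del self._store[key]

# Structural check: does this program "have the CRUD endpoint type"?
print(isinstance(UserEndpoint(), CrudEndpoint))  # True
```

The check is purely nominal/structural; capturing what a CRUD endpoint *means* (the fuzzy, modal part) is exactly what no current type system does.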

Cthulhu_ 3 days ago | parent | prev | next [-]

Because in practice the API endpoint isn't what takes up the time or LOC, but what's underneath. In fact, there's plenty of solutions to e.g. expose your database / data storage through an API directly. But that's rarely what you really want.

iterateoften 3 days ago | parent | prev [-]

Leaky abstractions. Lots of metaprogramming frameworks tried to do this over the years (take out as much crud as possible), but it always ends up that there is some edge case your unique program needs that isn’t handled, and then it is a mess to try to hack the metaprogramming layer to add what you need. Think of all the hundreds of frameworks that try to add an automatic REST API on top of a database table, but then you need permissions, domain-specific logic, special views, etc., and it ends up being easier to just write it yourself.

If you can imagine an evolutionary function oscillating over time between no abstraction and total abstraction, the current batch of frameworks like Django and others are roughly the local maximum that was settled on: enough to do what you need, but not doing too much, so it’s easy to customize to your use case.
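The leak described above fits in a few lines. A toy sketch (the `make_crud` factory and the permission rule are invented for illustration, not any real framework): the generic layer covers the happy path, and the first domain rule forces you back outside it.

```python
# Hypothetical auto-CRUD factory, standing in for "REST API on a table".
def make_crud(table: dict):
    """Generate generic create/read/delete handlers for one 'table'."""
    return {
        "create": lambda key, row: table.__setitem__(key, row),
        "read": table.get,
        "delete": lambda key: table.pop(key, None),
    }

accounts: dict = {}
crud = make_crud(accounts)
crud["create"]("alice", {"role": "admin"})
crud["create"]("bob", {"role": "user"})

# First domain rule the generator never anticipated: only admins may delete.
# We now have to wrap (or bypass) the generated layer -- the abstraction leaks.
def delete_as(actor: str, key: str) -> None:
    if accounts.get(actor, {}).get("role") != "admin":
        raise PermissionError(f"{actor} may not delete {key}")
    crud["delete"](key)

delete_as("alice", "bob")
print("bob" in accounts)  # False
```

Every such rule (permissions, special views, audit logs) accretes around the generated core until writing the endpoint by hand would have been simpler.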

therein 4 days ago | parent | prev [-]

The lazy reluctance you feel is atrophy in the making. LLMs induce that.

kaptainscarlet 3 days ago | parent [-]

That's my biggest worry, atrophy. But I will cross that bridge when I get to it.

latexr 3 days ago | parent [-]

With atrophy, by the time you get to the bridge you’ll realise it’s too deteriorated to cross and will have to spend a lot of time rebuilding and reinforcing it before you can get to the other side.

Cthulhu_ 3 days ago | parent [-]

That's it: every line of code is an implicit requirement, based on explicit requirements; when you have a codebase that needs to be maintained or replaced, it's your job as a developer to determine which of the implicit requirements in the code are explicit requirements for what the application does.

I do think that in a few years time, next generation coding LLMs will read current-generation LLM generated code to improve on it. The question is whether they're smart enough to ignore the implicit requirements in the code if they aren't necessary for the explicit ones.

(this comment makes sense in my head)

Most if not all of my professional projects have been replacing existing software. In theory, they're like-for-like, feature-for-feature rewrites. In practice, there's an MVP of must-have features which usually is only a fraction of the features (implicit or explicit) of the application it replaced, with the rewrite being used as an opportunity to re-assess what is actually needed, what is bloat over time, and of course to do a redesign and re-architecture of the application.

That is, rewriting software was an exercise in extracting explicit features from an application.

lukaslalinsky 3 days ago | parent | prev | next [-]

There is a way of doing things that sits between autocomplete and vibe coding. If you use these tools effectively, you learn how to prop them up with context, to make sure they won't start imagining how things should work; then you have them plan a solution, and if you have the time, you watch them implement it and approve as you go. Sometimes you stop it, make a correction, and go on. While it's doing its thing, you can plan the next task. Sometimes I do it in larger chunks, where I auto-accept changes/tool uses, but even then I only allow it to do one small task at a time, something that takes me minutes to review.

talles 4 days ago | parent | prev | next [-]

> Since reading code is harder than writing it,

Reading bad code is harder than writing bad code. Reading good code is easier than writing good code.

stavros 4 days ago | parent [-]

I beg to differ.

jagged-chisel 4 days ago | parent | next [-]

No need to beg. Everyone’s got their opinion. I just wish, this being Hacker News, that more people would articulate their different opinions instead of just stopping with “I disagree.”

stavros 4 days ago | parent [-]

Well, my first comment said "reading code is harder than writing code", your comment said "reading good code is easier than writing good code". I believe the two points are about equally articulated.

jagged-chisel 3 days ago | parent [-]

Neither comment is mine. I’m here on the outside, wanting to understand the arguments you have in your heads. Sure, the two comments you mention are equally under-articulated. Either continue the discussion for the benefit of others on the site, or leave it as it stands. Stating “I beg to differ” is pointless.

stavros 3 days ago | parent [-]

This comment put it well: https://news.ycombinator.com/item?id=45089978

fnordpiglet 3 days ago | parent | prev | next [-]

This is the sign of seniority IMO. First you learn to write code. Then you learn to write code that can be read. Then you learn to modify code. Then you learn to read other people’s code. Then you learn to modify other people’s code. Then you learn to own code regardless of who reads or writes it.

At this point in my career, 35 years in, I find it irrelevant whether the code I’m reading or writing was written by me or by others. Bad or good code, it’s all the same. By far the most effective work I do involves reading a lot of complex code written by many people over many years and seeing the exact one line to change or improve.

I find LLM-assisted coding very similar, frankly. I’ve finished maybe 20 projects or more in the last seven months on my own time that I never would have been able to do in my lifetime, for want of the free time to learn minutiae in stuff I am not familiar with. The parts it gets hung up on I’m able, with quick inspection, to recognize and unwedge, just like with any junior engineer. The junior engineers also are often much better versed in XYZ library than I am.

theshrike79 3 days ago | parent | next [-]

This is the thing.

LLM assisted coding ("vibe coding") is just project management.

You ask it to do things, then you check the work to a sufficient degree.

The better the specifications and documentation you give it, the better the result will be. Keeping tasks short and verifiable also helps a lot.

I've written SO many small tools for myself during the last year it's not even funny. Upgraded some shitty late night Python scripts to proper Go applications with unit tests and all, while catching up on my TV shows.

Converted my whole rat's nest of Docker compose files to a single Opentofu declarative setup.

None of this would've gotten done without an LLM assistant.

fnordpiglet 2 days ago | parent [-]

Funny, I end up working on 5-6 things at once that are fairly varied. My favorite rat's nest is rebuilding my DIY NAS as a NixOS declaration so I can rebuild the filer root from GitHub.

ontouchstart 3 days ago | parent | prev [-]

I’m at the same point as well. Doing more reading than writing.

Just want to add one more point: code is not fiction or even nonfiction. “Good or bad” style can be subjective, but correct or not correct is not, regardless of the reviewer’s mental model.

The difficulty of reading code is understanding its logic and logical consequences. The more complex the codebase (not just the lines we are reading), the riskier it is to modify.

That is why I use LLMs to write a lot of tests and let them review the logs to help me understand the logic. Even the tests can be disposable.

AlexCoventry 3 days ago | parent | prev [-]

Why is reading code harder than writing it?

blackoil 3 days ago | parent [-]

I think it has to do with the mental model. If you already know what to write and it is reasonably complex, you'll have a mental model ready and can quickly write it down (now even faster, as LLMs autocomplete 3-4 lines at a time). While reading someone else's code, you have to constantly map the written code onto a mental model, and then also check it for quality, security, and other issues.

stavros 3 days ago | parent | next [-]

Yeah, it's exactly this. Having to create a mental model from the code is much harder than having one and just writing it out.

AlexCoventry 3 days ago | parent [-]

I just tend to find LLM code output extremely easy to read, I guess. It tends to be verbose and do a lot of unnecessary stuff, but I can always get the point easily and edit accordingly.

theshrike79 3 days ago | parent | prev [-]

I'd say just reading your own code from a few years back will be as hard as reading someone else's.

_fat_santa 3 days ago | parent | prev | next [-]

> I have a 50% chance of getting either an expert or an intern writing my code

The way I describe it is that it's almost gambling with your time. Every time I want to reach for the Cline extension in VSCode, I ask myself "is this gamble worth it?" and "what are my odds for this gamble?".

For some things like simple refactoring I'm usually getting great odds so I use AI, but I would say at least 5-6 times last week I've thought about it and ended up doing it by hand as the odds were not in my favor.

One thing I've picked up using AI over the past few months is this sense of what it can and can't do. For some things I'm like "yeah it can do this no problem" but for other tasks I find myself going "better do this by hand, AI will just fuck it up"

fibonachos 4 days ago | parent | prev | next [-]

The multi-line autocomplete feels like the right middle ground (for me) when working in an existing code base with well established patterns. Adding new functionality is mostly a matter of scaffolding, commenting as I go, typing the first few characters of a code block and tabbing my way through the bulk of the code.

MangoCoffee 4 days ago | parent | prev | next [-]

>When I do care about the result, I have to read every line myself.

Isn't that the same as delegating a task to a junior developer, where as the senior you still have to check their work?

stavros 4 days ago | parent | next [-]

It is, but not the same as if a senior developer were writing it. I would feel much less like I have to check it then.

4 days ago | parent | prev [-]
[deleted]
ryandrake 4 days ago | parent | prev [-]

I still have doubts but I am warming up to Claude Code. Sometimes it gets in these ruts where it tries something wrong, gets shown it's wrong, keeps digging, and can't even explain how its failed suggestions could work. Other times, it seems to reproduce the kinds of insights (and even judgment) that I'd expect out of an actual practitioner. After it suggests code to fix problems, I always ask it to explain the context in more detail. I was recently using it to help develop a Qt C++ application, and ended up getting this interaction from it:

> Can you speculate on why this problem was only happening with these embedded buttons? Is this a flaw in Qt, or is this application trying to do something unusual?

  Why this problem occurred: This is a Qt design issue, not a flaw per se. Qt's item delegate editing model assumes editors are simple widgets (line edits, spin boxes, combo boxes) that don't spawn modal dialogs. When a modal dialog opens:

  1. The modal dialog steals focus from the editor button
  2. Qt's QAbstractItemView sees the FocusOut event and assumes editing is complete
  3. It calls QStyledItemDelegate::destroyEditor() to clean up the editor
  4. The button gets deleted while the color dialog is still running
  5. When the color dialog closes, it tries to interact with the deleted button

  This application is doing something unusual - using a button that spawns a modal dialog as a table cell editor. Most Qt delegates use immediate editing widgets (text fields, dropdowns) that don't require modal interactions.

  The eventFilter solution works by intercepting the FocusOut event that triggers editor destruction, essentially telling Qt "I'm still editing, don't destroy me yet."

> Is there a danger of leaked Qt resources now that we are blocking destruction?

  Yes, that's a valid concern. By blocking FocusOut events, we're preventing Qt's normal editor cleanup mechanism. However, the risk is likely minimal because:

  1. Qt still cleans up eventually - The editor will be destroyed when the table view itself is destroyed or when editing truly ends (e.g., clicking elsewhere, pressing Enter/Escape)
  2. Short-lived editors - These editor widgets are typically very short-lived (created when editing starts, destroyed when editing ends)
  3. Parent widget cleanup - Since the button has a parent widget, Qt's parent-child ownership will clean it up when the parent is destroyed

  To be completely safe, we could add a more targeted approach - only block FocusOut events while a color dialog is actively open, rather than blocking all FocusOut events. We could track when the modal dialog is shown/hidden and only filter during that window.

  But for this use case, the current solution should be fine since:
  - Color picking is infrequent
  - The editor lifetime is still bounded by the table view's normal lifecycle
  - Qt's parent-child cleanup provides a safety net

  The memory usage impact should be negligible compared to the complexity of implementing more sophisticated tracking.
-----

I've definitely had less-satisfying discussions over bugs with actual human senior software engineers than this.
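The pattern the assistant describes, a filter that swallows one specific event to veto a side effect, can be sketched framework-free. Everything below is invented for illustration (real Qt code would subclass QObject, override eventFilter(), and return true to consume the event):

```python
# Framework-free sketch of the event-filter workaround discussed above.
class Editor:
    """A stand-in for the table-cell editor widget."""
    def __init__(self):
        self.filters = []       # callables: event -> bool (True = consumed)
        self.destroyed = False

    def send(self, event: str) -> None:
        for f in self.filters:
            if f(event):        # a filter returning True swallows the event
                return
        if event == "focus_out":
            self.destroyed = True  # default: the view tears the editor down

state = {"dialog_open": True}   # a modal "color dialog" is showing
editor = Editor()

# The targeted workaround: swallow focus_out only while the dialog is open.
editor.filters.append(lambda e: e == "focus_out" and state["dialog_open"])

editor.send("focus_out")
print(editor.destroyed)  # False: the editor survives the modal dialog
```

This mirrors the "more targeted approach" the assistant suggested: filtering only during the dialog's lifetime, so normal editor cleanup resumes once the dialog closes.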

manwe150 3 days ago | parent [-]

It seems to be just playing the “yes and” improv game with you. You might want to also try prompting it against the same suggestions and see if it changes to follow your lead or continues to hold the original opinion.

theshrike79 3 days ago | parent [-]

But was it wrong?