| ▲ | hibikir 7 hours ago |
LLMs don't lack the virtue of laziness: they have it if you want them to, by just having a base prompt that matches intent. I've had good success convincing Claude-backed agents to aim for minimal code changes, make deduplication passes, and exercise basically every other reasonable "instinct" of a very senior dev. It's not knowledge that the models haven't integrated, but knowledge that many don't keep at the forefront with default settings. I bet we've all seen the models that over-edit everything and act like the crazy mid-level dev who fiddles with the entire codebase without caring one bit about anyone else's changes, or about the risk of knowledge loss from overfiddling. And on Jess' comments on validating docs vs generating them... It's a traditional locking problem, with traditional solutions. And it's not as if the agent cannot read git and realize when one thing is done first, in anticipation of the other, by convention. I'm quite senior: in fact, I have been a teammate of a couple of the people mentioned in this article. I suspect they'd not question my engineering standards. And yet I've not seen any of that kind of debt in my LLM workflows: if anything, by most traditional forms of evaluating software quality, the projects I work on are better than they were 5 or 10 years ago, using the same metrics as back then. And it's not magic or anything, but making sure the agents I run share those quality priorities. But I am getting work done, instead of spending time looking for attention at conferences. |
|
| ▲ | marktani 6 hours ago | parent | next [-] |
I agree with your sentiment here. However: > if anything, by most traditional forms of evaluating software quality, the projects I work on are better than what they were 5, 10 years ago, using the same metrics as back then. This aside introduces a lot of vagueness. Can you share insights to back up your claim? What metrics are you using, and how does your code from 10, 5, and 0 years ago perform on them? I feel that throwing in a vague claim like that unnecessarily dilutes your message and distracts from the point. But if you do have more to share, I'd be curious to learn more. |
| |
| ▲ | __mharrison__ an hour ago | parent | next [-] | | My anecdotes from using LLMs to modernize a legacy (20-year-old) system:

- 40x speed improvement
- Painless env setup
- 20-second deploy
- 90+% test coverage
- Ability to quickly refactor
- Documentation

(The original system that I wrote with one other programmer 20 years ago took 1.5+ years to write. Modern rewrite: 2 days) | |
| ▲ | awesome_dude 6 hours ago | parent | prev [-] | | The anecdote the GP is providing there rings true for me too, although I'm not sure if I am going to offer better detail. I'm a proponent of architectural styles like MVC, SOLID, hexagonal architecture, etc, and in pre-LLM workflows, "human laziness" often led to technical debt: a developer might lazily leak domain logic into a controller or skip writing an interface just to save time. The code I get the LLM to emit is a lot more compliant with those, BUT there is a caveat: the LLMs do have a habit of "forgetting" the specific concerns of the given file/package/etc, and I frequently have to remind them. The "metric" improvement isn't that the LLM is a better architect than a senior dev; it's that it reduces the cost of doing things the right way. The delta between "quick and dirty" and "cleanly architected" has shrunk to near zero, so the "clean" version becomes the path of least resistance. I'm seeing fewer "temporary" kludges because the LLM almost blindly follows my requests | | |
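As an illustration of the kind of leak described above (a made-up example, not from the thread), the "lazy" shortcut puts a business rule like tax calculation inline in the controller; the clean version keeps it in the domain layer:

```python
from dataclasses import dataclass

# Domain layer: pure business rule, no knowledge of HTTP or storage.
@dataclass
class Order:
    subtotal: float

    def total_with_tax(self, rate: float = 0.2) -> float:
        return round(self.subtotal * (1 + rate), 2)

# Controller: only translates between the transport shape and the domain.
def handle_get_total(request: dict) -> dict:
    order = Order(subtotal=request["subtotal"])
    return {"total": order.total_with_tax()}

# The lazy shortcut would be `request["subtotal"] * 1.2` inline in the
# controller -- the leak of domain logic the comment above describes.
```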
| ▲ | stickfigure 3 hours ago | parent [-] | | I don't think I'd like your code. But apparently there's enough implied YAGNI in my CLAUDE.md to prevent the unnecessary interfaces and layers of separation that you apparently like. So I guess there is a flavor for everyone. |
|
|
|
| ▲ | dc_giant 7 hours ago | parent | prev | next [-] |
| Mind sharing the instructions you give Claude to go for minimal code changes etc? |
| |
| ▲ | gherkinnn 3 hours ago | parent | next [-] | | I regularly prompt and re-prompt the clanker with esoteric terms like "subtractive changes" and "create by removing", and more common phrases like "make the change easy, then make the easy change", "yagni", "vertical slices", and "WET code is desirable". It mostly works. CC's plan mode creates a plan by cleaning up first, then defining narrow, integrated steps. Mentioning "subtractive" and "yagni" appears to be a reliable enough way to get an LLM to choose a minimal path. To my mind these instructions remain incantations, and I feel like an alchemist of old. | | |
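For the curious, a hypothetical CLAUDE.md fragment collecting these incantations might look something like the sketch below (the exact wording and section heading are my own guesses, not a tested recipe):

```markdown
# Coding style

- Prefer subtractive changes: create by removing. Delete code before adding it.
- Make the change easy, then make the easy change.
- YAGNI: no interfaces, options, or abstractions for hypothetical future needs.
- Work in vertical slices: ship one narrow, integrated path end to end.
- WET code is desirable: do not deduplicate until a pattern has repeated.
```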
| ▲ | physicles 2 hours ago | parent [-] | | Was just listening to the Lenny’s Podcast interview with Simon Willison, who mentioned another such incantation: red/green TDD. The model knows what this means and it just does it, with a nice bump in code quality apparently. I’m trying out another, what I call the principle of path independence. It’s the idea that the code should reflect only the current requirements, and not the order in which functionality was added — in other words, if you should decide to rebuild the system again from scratch tomorrow, the code should look broadly similar to its current state. It sort of works even though this isn’t a real thing that’s in its training data. |
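For anyone unfamiliar with the red/green incantation, the loop can be sketched in plain Python (names and the `slugify` example are illustrative, not from the podcast): first write a failing test (red), then just enough code to make it pass (green).

```python
import re

# Red: the test is written first and fails, since slugify doesn't exist yet.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaces  ") == "spaces"

# Green: the minimal implementation that makes the test pass.
def slugify(text: str) -> str:
    """Lowercase, trim, and collapse non-alphanumeric runs into hyphens."""
    text = text.strip().lower()
    return re.sub(r"[^a-z0-9]+", "-", text).strip("-")
```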
| |
| ▲ | awesome_dude 6 hours ago | parent | prev [-] | | I often say to Claude "you're doing X when I want Y, how can I get you to follow the Y path without fail" and Claude will respond with "Edit my claude.md to include the following" which I then ask Claude to do. | | |
| ▲ | sally_glance an hour ago | parent | next [-] | | Not sure this is a great idea. The model only internalized what it was trained on, and writing prompts/context for itself isn't part of that. I try to keep my context as clean as possible; mostly, today's models seem smart/aligned enough to be steered by a couple of keywords. | |
| ▲ | mettamage 4 hours ago | parent | prev [-] | | Ah yea I do that too. I often have reflection sessions with Claude where I ask it "how can I make sure you do behavior X so we get outcome Y?" It works relatively well but not always. | | |
|
|
|
| ▲ | whattheheckheck an hour ago | parent | prev [-] |
| You should go get attention at conferences. You could write a book called Practical LLM Coding |