| ▲ | ryandrake 7 hours ago |
| In my experience, Claude only knows how to spew code. Every problem you want it to solve, it translates into "more code" rather than "less code". You have to very closely code review everything it does, otherwise your codebase is going to just grow and grow, and asymptotically approach 100% debt. I code review everything that Claude produces, and I'd estimate about 90-95% of the time, my reaction is WOW it works but too much code dude, let's take 3 hours to handhold you through simplifying it until nothing more can be removed. |
|
| ▲ | godelski 2 minutes ago | parent | next [-] |
| > let's take 3 hours to handhold you through simplifying it until nothing more can be removed.
This is why I'm unconvinced that AI code makes me faster. Sure, I could produce a million lines an hour, but are we running a sprint or a marathon? I don't know about you, but I can't sprint a marathon.
I think much of the world of software has become incredibly myopic. I get it: it's a lot harder to win a war than to win a battle, but taking the easy way out usually just defers the costs to your future self. Problem is, those costs accrue interest...
Personally? I'm lazy and a cheapskate. When did programmers stop being lazy? More importantly, why? |
|
| ▲ | notarobot123 6 hours ago | parent | prev | next [-] |
| At this point, it's worth asking whether lots of relatively straightforward verbose code is actually significantly worse than the least code necessary for the problem. Obviously, architecture matters. What might matter less is verbosity. The reason we aimed for minimal "accidental complexity" up to now was directly related to the cost/pain of changing and maintaining that code. Hasn't the economics of maintenance and change shifted so much that accidental complexity isn't actually all that expensive/painful? I think a bit of refactoring, renaming and restructuring has been helpful for maintainability but recently I've been a little less inclined to worry about the easy readability of function bodies and fine implementation details. It still feels wrong but I can't justify the effort anymore. |
| |
| ▲ | torben-friis 4 hours ago | parent | next [-] | | >Hasn't the economics of maintenance and change shifted so much that accidental complexity isn't actually all that expensive/painful? Not while context windows cause decay and larger bills. The AI's max cognitive load C is larger than a human's, but if codebase size grows unbounded, the minimum context needed for a change will eventually surpass C. It is also a bad idea to let your codebase become readable only by a machine while we are still in the dark about the roles machines and people will play in the future. What if you have to go back to manual dev in a now-gargantuan codebase? | |
| ▲ | bartread 28 minutes ago | parent | prev | next [-] | | A problem I’ve found is that when you’re adding functionality or refactoring it often leaves unused methods or types behind, at least with multiple devs working on the same codebase. This unused code gets further modified as time goes on: new functionality is wired in, or it gets further refactored. Usually it’ll still have tests that cover it. It gives the impression of being live code, but it’s not: it’s zombified. So you get situations where it gets wired up to something and then that something doesn’t work and you wonder why and so you start digging about and you discover it’s because it has been wired into a path that is never executed. The fog of relatively recent changes sometimes makes it hard to figure out if the code should be unused or if someone just forgot to hook it in as part of a bigger piece of work. Then you find nobody else is really sure either. So that extra complexity comes at a cost. It can slow you down or trip you up; catch you by surprise. | |
| ▲ | binary0010 22 minutes ago | parent | prev | next [-] | | I don't think people are asking for the least code possible, just not the incredibly verbose and inefficient code you get by default from LLMs. For example, I have a game I've been working on for a few years. I'll do stuff like "implement this simple pseudo-physics system to make the bot follow the character like so...etc" After some planning and back and forth, it returns mostly working code, a little odd on some edge cases. But since I've hand-coded this thing for years, I could easily look at it and laugh my ass off: it had multiple classes and around 1k lines of code, all kinds of crazy non-performant crap. The exact thing I needed, I reprogrammed in around 5 lines of very simple code that did exactly what I needed with no edge-case weirdness. Now, the vibe coders actually ship that shit. I like to read vibe-coded games now and again, and there is no possible way those guys are ever shipping a real game, as every single decision is verbose, with the worst performance decisions repeated over and over everywhere. Sure, it can get you some cute little toy projects, but it will absolutely fall apart if you are trying to make real games. Don't know about SaaS apps or whatever; maybe that stuff doesn't matter at all. | |
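The commenter didn't share their five lines, but a bot-follow behavior in that spirit can genuinely fit in a handful of lines. A hypothetical sketch (Python for illustration; all names and parameters are assumptions, not the commenter's actual code):

```python
import math

def follow(bot_pos, target_pos, speed, dt, stop_dist=1.0):
    """Move bot_pos toward target_pos at a fixed speed; stop when close enough."""
    dx = target_pos[0] - bot_pos[0]
    dy = target_pos[1] - bot_pos[1]
    dist = math.hypot(dx, dy)
    if dist <= stop_dist:
        return bot_pos  # close enough: standing still avoids jitter at the target
    step = min(speed * dt, dist - stop_dist)  # never overshoot the stop radius
    return (bot_pos[0] + dx / dist * step, bot_pos[1] + dy / dist * step)
```

Called once per frame with the frame's delta time, this handles the "don't vibrate when you arrive" edge case without any extra classes.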
| ▲ | davebren 3 hours ago | parent | prev | next [-] | | I've been in a community that makes a lot of cognitive training software. There are some core open-source projects that were created without LLMs, but new projects are now mostly created by young people vibe-coding from scratch, or forking and modifying the existing projects with an LLM. The answer to your question is really obvious: the high-effort, manually coded projects stick around, and the low-effort vibe-coded projects are quickly forgotten. In the end, LLM-driven programming is always going to bring you to a dead end. There are certain things where I can predict they're going to fail, because they involve kinds of complexity the models can't deal with and may never be able to. The code gets so bad that even if an expert programmer wanted to make changes, it either wouldn't be possible or wouldn't be worth it. A lot of the time the vibe coders are so high off the low-effort sense of empowerment that they don't even realize what they made is completely broken. Well-written software has staying power because it can be understood and built upon. Understanding a problem deeply enough to devise an elegant solution even leads to new possibilities and ideas that will never be conceived with a more superficial understanding. | |
| ▲ | Trasmatta 5 hours ago | parent | prev | next [-] | | > Hasn't the economics of maintenance and change shifted so much that accidental complexity isn't actually all that expensive/painful? I sincerely believe that extensive accidental complexity will ALSO be bad for AI agents. Their quality will diminish as their context windows fill up with endless spaghetti and accidental complexity. I suspect we won't fully feel those effects for another year or so. | |
| ▲ | panflute 4 hours ago | parent [-] | | True, yet they have Moore's-Law-like growth going for properties like their context windows. I think the larger problem with letting them be verbose is Occam's razor: the more verbose they are, the more variant behavior they will have, and any variation that is not strictly necessary is likely to include incorrect behavior. | | |
| |
| ▲ | fatata123 37 minutes ago | parent | prev [-] | | [dead] |
|
|
| ▲ | joebates 6 hours ago | parent | prev | next [-] |
| Same. Luckily I enjoy the process of refactoring and deleting code is nearly arousing, so I get the initial dopamine rush of wow this works, followed by the dopamine rush of "wow now this is cleaner and works so much better". Keeps me in touch with the codebase too. |
| |
| ▲ | pixelready 6 hours ago | parent | next [-] | | Pruning code is to software engineers what cancelling plans is to introverts :) I think I need to work up a Claude skill named marie-kondo, so that when it breathlessly presents its triumphant solution, I can go “yes, but does it spark joy?” And have it go into an aggressive refactor loop with me. | | | |
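For anyone who wants to try the skill idea, a minimal sketch of such a skill file might look like the following. This assumes Claude Code's `SKILL.md` convention (a markdown file with YAML frontmatter, e.g. at `.claude/skills/marie-kondo/SKILL.md`); the name, description, and steps are all made up for illustration:

```markdown
---
name: marie-kondo
description: Aggressively simplify code that was just written. Use after
  completing a feature to remove unnecessary abstractions and dead code.
---

After presenting a working solution, make a second pass:

1. List every new class, helper, and abstraction the change introduced.
2. For each one, ask: does it spark joy? (Is it used more than once?
   Does removing it shrink the diff without breaking tests?)
3. Inline or delete anything that fails that test, then re-run the tests.
4. Repeat until nothing more can be removed without losing behavior.
```

The loop in step 4 is doing the same work the parent commenter describes doing by hand over 3 hours, just prompted up front.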
| ▲ | suzzer99 6 hours ago | parent | prev [-] | | I question any dev who doesn't get aroused by deleting code. I just removed an entire graphql endpoint - 500 lines of front and back-end code. I may need to be hosed down. | | |
| ▲ | scubbo 2 hours ago | parent | next [-] | | `$JOB` recently introduced the `#red-diffs` Slack channel. I just submitted ` +4 / -28,742`. Pretty proud. | |
| ▲ | devinprater 4 hours ago | parent | prev | next [-] | | Oh my. I may not want to know what selecting all and then pressing Delete would do to you. | |
| ▲ | fragmede 4 hours ago | parent | prev [-] | | Get a room! |
|
|
|
| ▲ | denkmoon 25 minutes ago | parent | prev | next [-] |
>asymptotically approach 100% debt
What do you do if you're just one dev in an org of 50, all of whom are pushing more and more code with every PR? I'm gonna have to leave, aren't I :( |
|
| ▲ | runeb 5 hours ago | parent | prev | next [-] |
| A particularly pronounced version of this can often be seen by letting 2 agents review and code in a loop. One agent will find some problems with the code, the other agent will address the review by adding more code. A good human developer might see that the better way to address the review is to backtrack and pick a different approach. The ai agents seem more prone to getting stuck down bad branches of the decision tree. |
|
| ▲ | layoric 2 hours ago | parent | prev | next [-] |
This was a large part of my problem with Claude Code: it is far too eager to get to writing code. I have found Matt Pocock's skills and Codex to work together quite well. You still have to ensure design/architecture is being followed, and review carefully obviously, but Codex by default seems to look for a minimal-change approach far more than Claude does or ever did. |
|
| ▲ | borski an hour ago | parent | prev | next [-] |
| You can also tell it to specifically focus on removing unnecessary code as a pass, and it does that pretty well. |
| |
| ▲ | joquarky 29 minutes ago | parent [-] | | I do one of these occasionally and it usually finds redundancies and/or inconsistencies to clean up. It's very effective and should be part of any process involving agentic coding. |
|
|
| ▲ | ddesotto 5 hours ago | parent | prev | next [-] |
I think this is more a byproduct of the way these models are architected: "one more token" is usually much more likely than a "STOP". Knowing when to stop, and doing more with less, is also very hard for human developers. What throws me off most of the time is the structure at the mid-level. It usually makes sense at the line-of-code level and maybe the project level, but at the file and folder level it loses track of what it already has, and of what it doesn't need to be verbose about. |
|
| ▲ | ok_dad 3 hours ago | parent | prev | next [-] |
| Hey that’s my exact experience. I started coding the interfaces by hand which helps with the architecture but you still have to say, “don’t add a bunch of helpers and stuff, stick to filling in the stubs.” Then I only have to spend one hour handholding the clanker to get it perfect. I usually do a lot of manual refactoring as well during that time. |
|
| ▲ | tailscaler2026 6 hours ago | parent | prev | next [-] |
Of course it writes a lot of code. It gets paid per token. Every additional line of technical debt is guaranteed future income. |
| |
| ▲ | HoldOnAMinute 6 hours ago | parent | next [-] | | Periodically you can also ask it to review the recent changes and see if there is a risk-free way to streamline them. You can also tell it to periodically summarize the "lessons learned" from the recent session(s) | |
| ▲ | embedding-shape 6 hours ago | parent | prev | next [-] | | Then local models shouldn't suffer from the same problems, but they do. They just aren't trained in the direction of "less code == better long-term maintainability" I'd say, rather than some grand "increased-token-usage" conspiracy. You can certainly steer them a bit to reduce the issue parent talks about, but they still go into that direction whenever they can, adding stuff on top of stuff, piling hacks/shim on top of other hacks/shims, just like many human developers :) | | |
| ▲ | bonesss 6 hours ago | parent [-] | | Training data is the masses of code from everyone. Restrict that data to just the best of the best, the tersest of the tersest, and we’d see better output. I don’t think people are sharing that kinda stuff (Jane Street’s gems stay locked up), and even if they did my presumption is that it’d be too narrow and demanding for general audiences. Big hopes for the long future, damned to some degree of mediocrity in the near term mass product. |
| |
| ▲ | layer8 5 hours ago | parent | prev | next [-] | | At some point they’ll introduce “deletion” tokens that cost ten times the regular token price. ;) | |
| ▲ | enraged_camel 5 hours ago | parent | prev [-] | | >> Of course it writes a lot of code. It gets paid per token. I don't buy it. I think a much more likely reason it leans towards adding code is because deleting code carries inherent risk: it can break things in major ways or minor ways or very visibly or invisibly. Adding new code, on the other hand, is a lot safer: the only parts that can break are those the AI touched inside its own working context. So it doesn't have to go down rabbit holes and potentially create bigger and bigger messes. |
|
|
| ▲ | fhub an hour ago | parent | prev | next [-] |
| I'm curious how much you have tuned your CLAUDE.md file. You can get very specific and direct about what your expectations/desires are. You can also have another agent do a critical review with your expectations/desires and feed that back. |
| |
| ▲ | joquarky 25 minutes ago | parent [-] | | Just be careful to not put too much in there or it won't have enough attention left over for the tasks. Look at the doc hub pattern if your {agent}.md file is getting more than ~100 lines. |
|
|
| ▲ | HoldOnAMinute 6 hours ago | parent | prev | next [-] |
Here's what I do: tell it, "Do not change any files yet, just listen." Then we discuss the problem. Then I have it write its understanding of the change to a file, which I review carefully. Then I let it implement, approving each change after manually looking at it; I already know what it should be doing. Make smaller changes and check each one carefully, before and after. |
| |
| ▲ | dvfjsdhgfv 6 hours ago | parent [-] | | This is a reasonable approach but has nothing to do with what is being pushed on us from all sides. |
|
|
| ▲ | wccrawford 6 hours ago | parent | prev | next [-] |
| I haven't used Claude, just Sweep, Copilot and whatever Jetbrains has. But they've definitely deleted code, not just added it. I know, because they have deleted code that I definitely still needed, and I had to reject those changes and start over on the prompt. |
|
| ▲ | joquarky 39 minutes ago | parent | prev | next [-] |
It really does want to make everything overcomplicated. I end most of my pre-plan prompts with "KISS - Keep it simple" to keep it mostly under control. I also keep each file under 1000 lines and do a full scan of code and docs for cruft every 20-30 task cycles. I've been working on the same project for six months and am glad to say there is minimal bloat. |
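The "keep each file under 1000 lines" check is easy to automate. A hypothetical helper (Python; the function name, budget, and glob pattern are assumptions, not the commenter's actual tooling):

```python
from pathlib import Path

LINE_BUDGET = 1000  # the per-file limit mentioned above

def oversized_files(root=".", pattern="*.py", budget=LINE_BUDGET):
    """Return (path, line_count) pairs for files over budget, largest first."""
    hits = []
    for path in Path(root).rglob(pattern):
        with path.open(errors="ignore") as f:
            count = sum(1 for _ in f)  # count lines without loading whole file
        if count > budget:
            hits.append((str(path), count))
    return sorted(hits, key=lambda t: -t[1])
```

Run periodically (or in CI), it turns the "every 20-30 task cycles" habit into a cheap, mechanical check.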
|
| ▲ | operatingthetan 6 hours ago | parent | prev | next [-] |
A lot of people seem to think that if you give the agent a framework and clear plans, it spews "good" code. I doubt it though. |
|
| ▲ | fragmede 4 hours ago | parent | prev [-] |
| Try codex. |