▲ fooker 4 days ago
I’ll go against the prevailing wisdom and bet that clean code no longer matters. No more than the exact order in which items are placed in main memory matters now. That used to be a significant consideration in software engineering until the early 1990s; with effectively ‘unlimited’ memory it is almost completely irrelevant. Similarly, generating code, refactoring, and implementing large changes are now easy enough that you can just rewrite stuff later. If you are not happy with how something is designed, a two-sentence prompt fixes it in a million-line codebase in thirty minutes.

▲ charlieflowers 4 days ago
It is an interesting possibility that must be considered, and only time will tell. However, I disagree. I think complex systems will still turn into a big ball of mud, and AI agents will get just as bogged down as humans when dealing with it. And even though rebuilding from scratch is cheaper than ever, it can't possibly be done cheaply while also preserving the millions of specific behaviors that users will have come to rely on. Maybe if you pushed spec-driven development to the absolute extreme, but I don't think pushing it that far is easy or cheap. Just as going from 90% unit test coverage to 100% is hard and possibly not worth it, I expect a similar barrier around extreme spec-driven development. Clarification: I'm advocating clean code in the generic sense, not Uncle Bob's definition.

▲ namar0x0309 4 days ago
You haven't built or serviced any engineering systems, I can tell. There are fundamental truths about complex systems that go beyond "coding". The same patterns appear in nature, where engineering principles and "prevailing wisdom" hold truer than ever. I suggest you take some time to study the systems powering critical infrastructure. You'll see and read about the grizzled veterans who keep them alive, how they are even more religious about clean engineering principles, and how "prevailing wisdom" is very much needed and always will be. That said, there are a lot of spaces where ignoring that wisdom works temporarily. But at scale it crashes and crumbles. Web apps are a good example of this.

▲ iterateoften 4 days ago
Garbage in, garbage out. The LLM is forced to eat its own output: if the output is garbage, its inputs will be garbage on future passes. How the code is structured changes how the LLM implements new features.

▲ DrBazza 4 days ago
If you work in finance, you've probably just bankrupted your company. Nanoseconds matter. Clean code tends to mean simple code, which tends to mean fast code. The order of items in memory does matter, as does cache locality: a typical L1 data cache is only 32 KB. If, of course, you're talking about web apps, then that has always been the Wild West.

▲ stickfigure 4 days ago
I actively use AI to refactor a poorly structured two-million-line Java codebase. A two-sentence prompt does not work. At all.

I think the OP is right; the problem is context. If you have a nicely modularized codebase where the LLM can neatly process one module at a time, you're in good shape. But two million lines of spaghetti requires too much context. The AI companies may advertise million-token windows, but response quality drops off long before you hit the end. You still need discipline.

Personally, I think the biggest gains in my company will come not from smarter AIs but from getting the codebase modularized enough that LLMs can comfortably digest it. AI is helping in that effort, but it's still mostly human-driven - and not for lack of trying.

▲ raincole 4 days ago
In the past ~15 years, only two new languages have gone from "shiny new niche toy" to "mainstream" status: Rust and Go[0]. That fact alone suggests that the idea of unlimited memory or unlimited CPU cycles is just wrong.

[0]: And TypeScript, technically. But I'd consider TypeScript a fork of JavaScript rather than a new language.

▲ saltyoldman 4 days ago
I started a side project that was supposed to be 100% vibe coded (because I have a similar view to yours). I'm using Go and Bubble Tea for a TUI interface, and I wanted mouse interaction. It turns out the agent defaulted to Bubble Tea 1.0 (instead of 2.0), and the mouse clicks all landed between one and three lines below the actual buttons. I kept telling it that the math must be wrong, then telling it to use Bubble objects to avoid all this crazy math. I am now hand-coding the UI, because the vibe-coded approach does not work.

I then looked at the db-agent I was designing. I explicitly told it to create SQL using the LLM, and it does - but the ACTUAL SQL it persists to the project comes from a separate SQL generator that it wrote by hand. The LLM-generated SQL displayed on screen looks perfect; then, when it comes time to commit it to the database, it runs an alternative DDL generator with lots of hard-coded CREATE TABLE syntax. It's actually a beautiful DDL generator, for something written circa 2015, but I ONLY wanted the LLM to do it. I started screaming at the agent. I think when they do take over, I might be high up on their hit list.

Just anecdata. I still think that in a year or two we'll be right about clean code not mattering, but 2026 might not be that year.

▲ recroad 4 days ago
I think clean architecture matters a lot, even more so than before. I get that you can just rewrite stuff, but that comes with inherent risk, even in the age of agents. Supporting production applications with low MTTR is what matters most to me. If you are relying entirely on your agent to identify and fix a production defect, I'd argue you are out at sea in a very scary place (comprehension debt and all that). It is in these cases that architecture and organization matter, so you can trace the calls and see what's broken. I get that the code is largely a black box as fewer and fewer people review the details, but you still have to review the architecture and design, and that's not going away. To me, things like SRP, SOLID, and DRY are ever more important.

▲ williamdclt 4 days ago
Among other reasons, one reason for clean code is that it avoids bugs. AIs producing dirty code produce more bugs, like humans. AIs iterating on dirty code produce more bugs, like humans.

▲ CharlieDigital 4 days ago
Clean code still matters. If it's easier for a human to read and grasp, it will use less context and be less error-prone for the LLM. If the entities are well isolated, you also save context and time when making changes, since the area of effect is contained. Clean code matters because it saves cycles and tokens. If you're going to generate the code anyway, why not generate "pristine" code? Why would you want the agent to generate shitty code?

▲ mbesto 4 days ago
I've seen enough dirty code (900+ tech diligences over the last 12 years) to know that many businesses are successful in spite of having bad code.

▲ embedding-shape 4 days ago
> a two sentence prompt fixes it in a million line codebase in thirty minutes.

Could you please create a verifiable and reproducible example of this? In my experience, agents get slower the larger a repository is. Maybe I'm just very strict with my prompts, but while initial changes in a greenfield project might take 5-10 minutes each, unless you deeply care about the design and architecture you'll reach 30-minute change cycles long before you reach a million lines of code.

▲ mjr00 4 days ago
> I’ll go against the prevailing wisdom and bet that clean code does not matter any more. No more than the exact order of items being placed in main memory matters now.

This is a really funny comment to make when the entire Western economy is propped up by computers multiplying extremely large matrices - probably the single most obvious CompSci 101 example of a case where the placement of data in memory is really, really important.

▲ DarkNova6 4 days ago
That works until you have to fix a bug.

▲ grey-area 4 days ago
> If you are not happy about how something is designed, a two sentence prompt fixes it in a million line codebase in thirty minutes.

This fantasy is so far from reality with current systems that it is unlikely ever to be fulfilled, even if they were far more capable.

▲ devin 4 days ago
Funny you should mention that. I just used a two-sentence prompt to do something straightforward. It took careful human consideration and three rounds of "two sentence" prompts to arrive at the _correct_ transformation.

I think you're missing the cost of screwing up design-level decision-making. If you fundamentally need to rethink how you're doing data storage, have a production system with other dependent systems, have public-facing APIs, and so on, you are definitely not talking about "two sentence prompts". You are playing a dangerous game with risk if you are not paying some of it down, or at the very least accounting for it as you go.

▲ bcrosby95 4 days ago
I've been working on a client/server game in Unity for the past few years, and the LLM constantly forgets to update parts of the UI when I have it make changes. The codebase isn't even particularly large - maybe around 150k LOC in total. A single complex change (defined as 'touching many parts') can take Claude Code a couple of hours. I could probably do it in a couple of hours myself, but I can have Claude do it (while I steer) while I also think about other things.

My current guess is that LLMs are really good at web code because they've seen a shitload of it. My experience in arenas with less open source code has been less magical.

▲ johntash 4 days ago
One thought I've had a few times is "well... this is good enough; maybe a future model will make it better", so I won't completely disagree. But my counterargument is that generated code can easily balloon in size, and then if you ever have to figure out manually how something works, it is much harder. You'll also end up with a lot of dead or duplicated code.

▲ pocksuppet 4 days ago
Actually, we're going back to caring about the order of atoms in main memory. When your code has good cache locality and prefetching, it can run 100 times faster - no joke. Arranging your program so the data stays in a cache-friendly order is called data-oriented design - not to be confused with domain-driven design.