notarobot123 5 hours ago

At this point, it's worth asking whether lots of relatively straightforward verbose code is actually significantly worse than the least code necessary for the problem. Obviously, architecture matters. What might matter less is verbosity.

The reason we aimed for minimal "accidental complexity" up to now was directly related to the cost/pain of changing and maintaining that code. Hasn't the economics of maintenance and change shifted so much that accidental complexity isn't actually all that expensive/painful?

I think a bit of refactoring, renaming and restructuring has been helpful for maintainability but recently I've been a little less inclined to worry about the easy readability of function bodies and fine implementation details. It still feels wrong but I can't justify the effort anymore.

torben-friis 4 hours ago | parent | next [-]

> Hasn't the economics of maintenance and change shifted so much that accidental complexity isn't actually all that expensive/painful?

Not while filling the context window causes quality decay and larger bills.

The AI's max cognitive load C is larger than a human's, but if codebase size grows unbounded the minimum context needed for a change will eventually surpass C.
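The argument can be sketched as a back-of-envelope model. All of the numbers below are invented assumptions for illustration (tokens per line, how much of a codebase is relevant to a change, the budget C), not measurements:

```python
# Toy model of the claim above: the context budget C is fixed, while the
# minimum context needed to make a safe change grows with codebase size.
# Every constant here is an illustrative assumption.

C = 200_000  # assumed context budget in tokens for a large model

def context_needed(loc: int, tokens_per_loc: float = 1.3,
                   relevant_fraction: float = 0.05) -> int:
    """Tokens an agent must read for one change: the fraction of the
    codebase that is relevant, converted from lines to tokens."""
    return int(loc * tokens_per_loc * relevant_fraction)

# Under these assumptions a 100k-line codebase still fits comfortably,
# but a 10M-line codebase no longer does:
assert context_needed(100_000) < C
assert context_needed(10_000_000) > C
```

Tuning the constants moves the crossover point around, but as long as the relevant fraction doesn't shrink as fast as the codebase grows, some size eventually exceeds C.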

It is also a bad idea to let your codebase become only readable by a machine when we are still in the dark about the role machines and people will take in the future. What if you have to go back to manual dev in a now gargantuan codebase?

bartread 25 minutes ago | parent | prev | next [-]

A problem I’ve found is that when you’re adding functionality or refactoring, it often leaves unused methods or types behind, at least with multiple devs working on the same codebase.

This unused code gets further modified as time goes on: new functionality is wired in, or it gets further refactored. Usually it’ll still have tests that cover it. It gives the impression of being live code, but it’s not: it’s zombified.

So you get situations where it gets wired up to something, and then that something doesn’t work, and you wonder why, so you start digging about and discover it has been wired into a path that is never executed.

The fog of relatively recent changes sometimes makes it hard to figure out if the code should be unused or if someone just forgot to hook it in as part of a bigger piece of work. Then you find nobody else is really sure either.

So that extra complexity comes at a cost. It can slow you down or trip you up; catch you by surprise.

binary0010 19 minutes ago | parent | prev | next [-]

I don't think people are talking about the least code possible, just not incredibly verbose and inefficient like what you get by default from LLMs.

For example, I have a game I've been working on for a few years. I do stuff like "implement this simple pseudo-physics system to make the bot follow the character like so...etc"

After some planning and back and forth, it returns mostly working code, a little odd on some edge cases.

But as I've hand-coded this thing for years, I could easily look at it and laugh my ass off: it had multiple classes and around 1k lines of code, all kinds of crazy non-performant crap.

The exact thing I needed, I reimplemented in around 5 lines of very simple code that did exactly what I wanted, with no edge-case weirdness.
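For a sense of what a handful of lines of "follow the character" logic can look like, here is a hypothetical sketch. None of this is the commenter's actual code; the names (`follow`, `FOLLOW_SPEED`, `STOP_RADIUS`) and constants are invented, and the technique (move a fixed fraction of the remaining distance each tick, with a stop radius to avoid jitter) is just one common way to do it:

```python
# Hypothetical follow behavior: each tick, move a fixed fraction of the
# remaining distance toward the target, and stop inside a dead zone so
# the bot doesn't jitter when it arrives.

FOLLOW_SPEED = 0.1   # fraction of the remaining distance covered per tick
STOP_RADIUS = 8.0    # close enough: stay put

def follow(bot_pos, target_pos, dt=1.0):
    dx, dy = target_pos[0] - bot_pos[0], target_pos[1] - bot_pos[1]
    if (dx * dx + dy * dy) ** 0.5 < STOP_RADIUS:  # inside the dead zone
        return bot_pos
    return (bot_pos[0] + dx * FOLLOW_SPEED * dt,
            bot_pos[1] + dy * FOLLOW_SPEED * dt)
```

The point of the anecdote stands either way: behavior like this needs a few lines and two tuned constants, not multiple classes.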

Now the vibe coders actually ship that shit. I like to read vibe-coded games now and again, and there is no possible way those guys are ever shipping a real game: every single decision is verbose, with the worst performance choices repeated over and over, everywhere.

Sure it can get you some cute little toy projects, but it will absolutely fall apart if you are trying to make real games.

Don't know about SaaS apps or whatever. Maybe that stuff doesn't matter at all.

davebren 3 hours ago | parent | prev | next [-]

I've been in a community that makes a lot of cognitive training software. There are some core open-source projects that were created without LLMs, but new projects are now mostly created by young people vibe-coding from scratch or forking and modifying the existing projects with an LLM.

The answer to your question is really obvious. The high-effort, manually coded projects stick around, and the low-effort vibe-coded projects are quickly forgotten. In the end, LLM-driven programming is always going to bring you to a dead end. There are certain things where I can predict they're going to fail, because they involve kinds of complexity LLMs can't deal with and will never be able to. The code gets so bad that even if an expert programmer wanted to make changes, it either wouldn't be possible or wouldn't be worth it. A lot of the time the vibe coders are so high off the low-effort sense of empowerment that they don't even realize what they made is completely broken.

Well written software has staying power because it can be understood and built upon. Understanding a problem deeply enough to devise an elegant solution even leads to new possibilities and ideas that will never be conceived with a more superficial understanding.

Trasmatta 5 hours ago | parent | prev | next [-]

> Hasn't the economics of maintenance and change shifted so much that accidental complexity isn't actually all that expensive/painful?

I sincerely believe that extensive accidental complexity will ALSO be bad for AI agents. Their quality will diminish as their context windows get filled up with endless amounts of spaghetti and accidental complexity. I feel like we won't fully start feeling those effects for another year or so.

panflute 4 hours ago | parent [-]

True, yet they have Moore's Law-like growth going for properties like their context windows. I think the larger problem with letting them be verbose is Occam's razor: the more verbose they are, the more variant behavior they will have, and any variation that is not strictly necessary is likely to include incorrect behavior.
