phtrivier 5 hours ago

Yes, if it was made for human comprehension or maintenance.

If it's entirely generated / consumed / edited by an LLM, arguably the most important metric is... test coverage, and that's it ?

mdavid626 4 hours ago | parent | next [-]

Oh boy, you couldn't be more wrong. If anything, LLMs need MORE readable code, not less. Do you want to burn all your money on tokens?

grey-area 5 hours ago | parent | prev | next [-]

LLMs are so so far away from being able to independently work on a large codebase, and why would they not benefit from modularity and clarity too?

olmo23 3 hours ago | parent [-]

I agree the functions in a file should probably be reasonably-sized.

It's also interesting to note that due to the way round-tripping tool-calls work, splitting code up into multiple files is counter-productive. You're better off with a single large file.

konart 5 hours ago | parent | prev | next [-]

Can't we have generated / llm generated code to be more human maintainable?

mrbungie 4 hours ago | parent | prev | next [-]

Can't wait to have LLM-generated physical objects that explode in your face and no engineer can fix.

phtrivier an hour ago | parent [-]

Oh, do we agree on that. I never said it was "smart" - I just had a theory that would explain why such code could exist (see my longer answer below).

Bayko 5 hours ago | parent | prev [-]

Yeah, I honestly don't understand his comment. Is it bad code writing? Pre-2026? Sure. In 2026? Nope. Is it going to be a headache for some poor person on call? Yes. But then again, are you "supposed" to go through every single line in 2026? Again, no. I hate it. But the world is changing, and until the bubble pops this is the new norm.

phtrivier 2 hours ago | parent | next [-]

Sorry, I was not clear enough.

My first word was literally "Yes", so I agree that a function like this is a maintenance nightmare for a human. And, sure, the code might not be "optimized" for the LLM, or for token efficiency.

However, to try and make my point clearer: it's been reported that Anthropic has "some developers who don't write code" [1].

I have no inside knowledge, but it's possible, by extension, to assume that some parts of their own codebase are "maintained" mostly by LLMs themselves.

If you push this extension, then the generated code only has to be "readable" to:

* the next LLM that'll have to touch it

* the compiler / interpreter that is going to compile / run it.

In a sense (and I know this is a stretch, and I don't want to overdo the analogy), are we, here, judging a program's quality by reading something more akin to "the x86 asm output by the compiler", rather than the "source code" — which in this case is "English prompts", hidden somewhere in the Claude Code session of a developer?

Just speculating, obviously. My org is still very much more cautious, and mandates that people hold LLM-generated code to the same standard as human-written code; and I agree with that.

I would _not_ want to debug the function described by the commenter.

So I'm still very much on the "Claude as a very fast text editor" side, but is it unreasonable to assume that Anthropic might be further along on the "Claude as a compiler for English" side?

[1] https://www.reddit.com/r/ArtificialInteligence/comments/1s7j...

heavyset_go 13 minutes ago | parent [-]

If that's the case then that's dumb

yoz-y 3 hours ago | parent | prev [-]

The jury on this one is still out.