| ▲ | phtrivier 2 hours ago |
Sorry, I was not clear enough. My first word was literally "Yes", so I agree that a function like this is a maintenance nightmare for a human. And, sure, the code might not be "optimized" for the LLM, or for token efficiency.

However, to make my point clearer: it's been reported that Anthropic has some developers who don't write code [1]. I have no inside knowledge, but it's possible, by extension, that some parts of their own codebase are "maintained" mostly by LLMs themselves. If you push this extension, then the generated code only has to be "readable" to:

* the next LLM that will have to touch it
* the compiler / interpreter that is going to compile / run it

In a sense (and I know this is a stretch, and I don't want to overdo the analogy), are we judging program quality here by reading something more akin to "the x86 asm output by the compiler", rather than the "source code" - which, in this case, is "english prompts" hidden somewhere in a developer's claude code session?

Just speculating, obviously. My org is still much more cautious, mandating that people hold code generated by an LLM to the same standard as code written by a human; and I agree with that. I would _not_ want to debug the function described by the commenter.

So I'm still very much on the "claude as a very fast text editor" side, but is it unreasonable to assume that Anthropic might be further along the "claude as a compiler for english" side?

[1] https://www.reddit.com/r/ArtificialInteligence/comments/1s7j...
| ▲ | heavyset_go 13 minutes ago | parent |
If that's the case, then that's dumb.