Aurornis | a day ago
> I feel like one could delete almost all comments from that project without losing any information

I'm far from a heavy LLM coder, but I've noticed a massive excess of unnecessary comments in most output. I'm always deleting the obvious ones. But then I started noticing that the comments seem to help the LLM navigate additional code changes. It's like a big trail of breadcrumbs for the LLM to parse. I wouldn't be surprised if vibe coders get trained to leave the excess comments in place.
cztomsik | 15 hours ago
More tokens -> more compute. Attention-based models work by attending every token to every other token, so more tokens means not only more time to "think" but also the ability to think "better". That is also at least part of the reason why o1/o3/R1 can sometimes solve what other LLMs could not.

Anyway, I don't think any of the current LLMs are really good at coding. What they're good at is copy-pasting (with some minor changes) from the massive code corpus they were pre-trained on. For example, give one some Zig code and it's straight-up unable to solve even basic tasks. Same if you give it a really unique task, or if you simply ask for potential improvements to your existing code. Very, very bad results, no signs of out-of-the-box thinking whatsoever.

BTW: I think what people are missing is that LLMs are really great at language modeling. I've had great results, and boosts in productivity, just from being able to prepare the task specification and make quick changes to it really easily. Once I have a good understanding of the problem, I can usually implement everything quickly, and do it in a much, much better way than any LLM currently can.
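A rough sketch of why compute grows with token count, assuming plain single-head self-attention (real models add learned Q/K/V projections, multiple heads, and masking, so this is illustrative only):

```python
import numpy as np

def self_attention(X):
    """Toy single-head self-attention over a sequence of token embeddings.

    X: (n_tokens, d_model). Every token attends to every other token,
    so the score matrix is n_tokens x n_tokens: compute grows
    quadratically with sequence length.
    """
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)                    # (n, n): all token pairs
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ X                               # weighted mix of tokens

short = np.random.randn(64, 32)
long = np.random.randn(128, 32)
print(self_attention(short).shape)  # (64, 32), via a 64x64 score matrix
print(self_attention(long).shape)   # (128, 32), via a 128x128 score matrix
```

Doubling the context doubles the tokens but quadruples the pairwise score matrix, which is why longer chains of reasoning tokens buy the model more computation per answer.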
lolinder | a day ago
It doesn't hurt that the model vendors get paid by the token, so there's zero incentive to correct this pattern at the model layer. | ||||||||
dkersten | 14 hours ago
What's worse, I get a lot of comments left saying what the AI did, not what the code does or why. E.g. "moved this from file xy", "code deleted because we have abc", etc. Completely useless stuff that should be communicated in the chat window, not in the code.
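A hypothetical illustration of the difference (function names and the URL are invented):

```python
import urllib.request

# Comments that narrate the edit -- this belongs in the chat window
# or a commit message, not the source:
def fetch_user_noisy(user_id: int) -> bytes:
    # moved this from utils.py
    # removed the retry loop because backoff lives in the client now
    with urllib.request.urlopen(f"https://example.com/users/{user_id}") as r:
        return r.read()

# Comments that explain what the code does and why:
def fetch_user(user_id: int) -> bytes:
    # Single request; retries and backoff are the caller's responsibility.
    with urllib.request.urlopen(f"https://example.com/users/{user_id}") as r:
        return r.read()
```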
nostromo | 18 hours ago
LLMs are also good at commenting on existing code. It's trivial to ask Claude via Cursor to add comments illustrating how some code works. I've found this helpful with uncommented code I'm trying to follow. I haven't seen it hallucinate an incorrect comment yet, but sometimes it will leave a TODO saying a section should be made clearer. (Rude... haha)