nabbed 10 hours ago

I worried this blog post was going to pivot into a marketing pitch for some product, but no, it just describes the issue where the AI tool that generates your code probably won't document its reasons for the choices it makes. That documentation problem exists in the pre-AI era too, except that the reasons might exist in the heads of your co-workers and could possibly be teased out.

I know nothing about AI code generation (or about AI in general), but I wonder if you could include in your prompt a request that the AI describe the reasons for its choices and actually include those reasons as comments in the code.

jnovek 9 hours ago | parent | next [-]

The AI can't really describe its reasoning, though. It can only look at its context history and find a justification (which it will then present as reasoning). In my experience asking the model "why did you do that" carries substantial hallucination risk.

larsfaye 4 hours ago | parent | next [-]

Not only can it not describe its reasoning, it can't "remember" if you ask it later; it can only observe what is. Nor can it be consistent; I've had it shift reasoning numerous times as the questioning continues, only to come full circle to its original statement while it apologizes profusely for being misleading.

The model will always be completing the story you start with it. There's no opinion to uncover because there's no experience that occurred. It's impossible to know where your influence ends and the model's factual basis begins.

0gs 9 hours ago | parent | prev | next [-]

True, though I have found that forcing (I use an agent skill to do this) an LLM's agent to document the reasoning behind each "decision" it makes seems to lead to better decision-making. Or at least, more justifiable decisions (even if the justification is bad).
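The skill instruction is nothing fancy; a hypothetical sketch of the kind of rule I mean (names and wording are illustrative, not the actual skill):

```markdown
<!-- Hypothetical agent-skill rule; file names are illustrative -->
## decision-log

Whenever you make a non-obvious design choice (library, data structure,
API shape, error-handling strategy), append an entry to DECISIONS.md:

- **Decision:** what you chose
- **Alternatives considered:** at least one
- **Why:** the trade-off that tipped it
```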

dalmo3 9 hours ago | parent | prev [-]

While you're technically correct, I found that a simple "give me the strongest arguments for and against this, cite your sources" works wonders.

localhoster 9 hours ago | parent | prev | next [-]

I think the issue is that the majority of agentic developers don't understand their design fully. Previously, those holes had to be closed by a person eventually, and you inherently gained understanding in the process.

This is not the case any more. I never thought LLMs write bad solutions, but when you let one think for you, you lose something important: understanding. And when something breaks, some people consider that proof that the LLM is the problem. And in a sense it is. But you are also to blame. And that's exactly the issue with this whole industry: in order to move faster you don't need to "type" faster. A person bashing his hands on the keyboard randomly at top velocity can get a pretty high wpm. In order to move faster you need to _think_ faster, be _snappier_ and _sharper_, and most people aren't.

I like how one of my colleagues phrased it in a company meeting. He asked one of the C-levels: "For LLMs to 10x me, I need to let them make all decisions and dictate intentions. By percentage, how much of the code do you want me to _know_?"

andybak 9 hours ago | parent | prev | next [-]

Isn't one of the common criticisms of AI code that it's a bit too keen to write extensive comments - even when they probably aren't needed?

rectang 9 hours ago | parent | next [-]

The default for Claude today is "no comments". Here's a conversation I had with Claude about it recently:

---

Me: I've noticed that Claude is deleting my comments and tends not to insert many of its own. I don't want to go too crazy, but can we modify CLAUDE.md to dial up the comments a little? Like, try not to delete comments that are still relevant, and add a few explanatory comments.

Claude: [... proposal for modifying CLAUDE.md ...]

Claude: Want me to apply this, tweak the wording, or place it differently? Note that your global ~/.claude/CLAUDE.md-style instructions (visible to me as system guidance) currently say "default to writing no comments" — the project-level rule above will override that for this repo, which is what you want.

Me: Where is this rule that says "default to writing no comments"? That seems crazy?

Claude: That text is part of the system prompt I'm given by Claude Code itself — it's not in any file in your project or in your ~/.claude/ config. It's baked into the harness's default instructions to the model.

Claude: Specifically, two lines from my system prompt:

Claude: Under "Doing tasks": "Default to writing no comments. Only add one when the WHY is non-obvious..."

Claude: Under "Text output": "In code: default to writing no comments. Never write multi-paragraph docstrings or multi-line comment blocks — one short line max."

mbauman 9 hours ago | parent | prev | next [-]

Well the comments tend to be superfluous "whats" (describing the code itself) instead of the more helpful "whys." And they're almost never the most useful "why nots".

andybak 7 hours ago | parent | next [-]

That's different to my (recent) experience. I get fairly insightful "whys". At least on bugfix commits.

idle_zealot 9 hours ago | parent | prev [-]

> And they're almost never the most useful "why nots".

Really? I find that Claude really likes to write "why nots" in comments when iterating on implementations and fixing bugs, to the extent that the comments grow into spot-logs of overly-specific documentation of what was tried and why it was scrapped.

Semaphor 9 hours ago | parent | prev [-]

IME the comments are not useful though. I'm not the biggest fan of AI code, but the code's still way higher quality than the "this is what this does" comments.

ddosmax556 9 hours ago | parent | prev | next [-]

Perfectly possible IME, it just requires more time. Even if you understand all the (important) parts of your code and validate it, AI still helps with productivity a lot. It's just not as fast as blindly vibe coding, at least not at first. In my experience, agents on their own without checks, driven by superficial prompts without understanding, either introduce bugs or, if you have an appropriate test harness, become painfully slow. It requires skill to use an agent effectively.

sanderjd 8 hours ago | parent | prev | next [-]

But I don't relate to "it probably won't document its reasons"... I spend more time telling it that it has over-documented than under-documented.

It always wants to include the specific thing it is fixing in the API docs for methods it touches. No, this is not the place for a changelog of why this method is implemented the way it is! Describe the interface and the contract, these details are non sequiturs and they are a distraction.

zeeveener 9 hours ago | parent | prev | next [-]

Isn't this what good commit messages are intended to address? If a person has gone so far as to allow an agent to write all their code, they're likely having it commit on their behalf too.

Writing a skill / set of rules around what makes a good commit message would encourage the LLM to record its reasoning (however much we truly consider it to be "reasoning").
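As a hypothetical sketch, such a rule might look something like this (wording is illustrative):

```markdown
<!-- Hypothetical commit-message rule for an agent; wording is illustrative -->
When committing, write the message as:

1. Subject: imperative mood, states *what* changed.
2. Body: the *why* — the problem, the chosen approach, and at least
   one alternative you rejected, with the reason you rejected it.
3. Never describe the diff line-by-line; the diff already does that.
```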

rectang 9 hours ago | parent | prev | next [-]

By default, Claude is set to avoid comments. Eventually I got tired of it deleting my own explanatory comments and overrode the behavior with an addition to CLAUDE.md.
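The addition was along these lines (a sketch, not the exact text; wording is illustrative):

```markdown
<!-- Project-level CLAUDE.md addition; wording is illustrative -->
## Comments

- Do not delete existing comments unless they are factually wrong
  or describe code that no longer exists.
- Add a short "why" comment wherever the reason for a choice is
  non-obvious from the code itself.
- Avoid comments that merely restate what the code plainly does.
```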

I think Claude is just behaving like all those programmers who make a VERY BIG DEAL about how much they HATE HATE HATE comments that might reiterate what the code does so will go to extreme measures to avoid them.

radicalbyte 9 hours ago | parent | prev [-]

Very few developers do that especially on early / key parts of the codebase.