JackFr 6 days ago

- When we have a report of a failing test, before fixing it, identify the component under test. Think deeply about the component and describe its purpose, the control flows and state changes that occur within the component, and the assumptions the component makes about its context. Write that analysis to a file called component-name-mental-model.md.

- Whenever you address a failing test, always bring your component mental model into the context.

Paste that into your Claude prompt and see if you get better results. You'll even be able to read and correct the LLM's mental model.
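For illustration only, the mental-model file might end up looking something like this (the component and every detail below are invented, not taken from any real codebase):

    # date-picker-mental-model.md

    ## Purpose
    Lets the user pick a date and reports it to the parent via an onChange callback.

    ## Control flow
    Focus opens a calendar popover; selecting a day or pressing Escape closes it.

    ## State changes
    open: boolean, selectedDate: Date | null, focusedDay: Date.

    ## Assumptions about context
    Expects a locale/timezone provider higher in the tree; assumes the parent owns the value (controlled component).

The point is that the file is plain prose, so you can read it and fix a wrong assumption before the model acts on it.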

siddboots 6 days ago | parent | next

In my experience, complicated rules like this are extremely unreliable; Claude just ignores them much of the time. The problem is that when Claude sees a failing test, it is usually just an obstacle to completing some other task at hand; it essentially never chooses to branch out into some new, complicated workflow and will instead find some other low-friction solution. This is exactly why subagents are effective: if Claude knows to always run tests via a testing subagent, then the specific testing workflow can become that subagent's whole objective.
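For concreteness, here is a sketch of what that could look like, assuming Claude Code's subagent mechanism (an agent defined as a markdown file with YAML frontmatter, e.g. under .claude/agents/; the exact fields and location may differ by version, and everything below is a hypothetical example):

    ---
    name: test-fixer
    description: Use this agent whenever a test fails. It owns the entire
      diagnose-and-fix workflow for failing tests.
    tools: Read, Edit, Bash, Grep
    ---
    Before touching a failing test, identify the component under test, write
    or update component-name-mental-model.md with its purpose, control flow,
    state changes, and context assumptions, and only then fix the test with
    that mental model loaded.

Because the whole file is the subagent's job description, the workflow isn't competing with whatever task the main agent happened to be in the middle of.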

fmbb 6 days ago | parent | prev

Anthropic sells this thing called Claude Code, but their customers have to train it to know how to be a programmer?

Junior developers not even out of school don’t need to be instructed to think.

JackFr 6 days ago | parent

> Junior developers not even out of school don’t need to be instructed to think.

Have you trained juniors lately?