Writing a Good Claude.md(humanlayer.dev)
66 points by objcts 3 hours ago | 17 comments
_pdp_ 8 minutes ago | parent | next [-]

There is a far easier way to do this, one that is perfectly aligned with how these tools work.

It is called documenting your code!

Just write what this file is supposed to do in a clear, concise way. It acts as a prompt, provides much-needed context specific to the file, and is used only when necessary.

Another tip is to add README.md files where possible and where it helps. What is this folder for? Nobody knows! Write a README.md file. It is not rocket science.

What people often forget about LLMs is that they are largely trained on public information which means that nothing new needs to be invented.

You don't have to "prompt it just the right way".

What you have to do is apply the same good old best practices.

dhorthy 3 minutes ago | parent [-]

For the record I do think the AI community tries to unnecessarily reinvent the wheel on crap all the time.

sure, readme.md is a great place to put content. But there are things I'd put in a readme that I'd never put in a claude.md if we want to squeeze the most out of these models.

Further, claude.md/agents.md files have special quality-of-life mechanics in the coding agent harnesses, e.g. injecting the file into the context window whenever an agent touches that directory, whether or not the model wants to read it.

> What people often forget about LLMs is that they are largely trained on public information which means that nothing new needs to be invented.

I don't think this is relevant at all - when you're working with coding agents, the more you can finesse and manage every token that goes into your model and how it's presented, the better results you can get. And the public data that goes into the models is near useless if you're working in a complex codebase, compared to the results you can get if you invest time into how context is collected and presented to your agent.

candiddevmike 7 minutes ago | parent | prev | next [-]

None of this should be necessary if these tools did what they say on the tin, and most of this advice will probably age like milk.

Write readmes for humans, not LLMs. That's where the ball is going.

rootusrootus a minute ago | parent | prev | next [-]

[delayed]

andersco 14 minutes ago | parent | prev | next [-]

I have found enabling the codebase itself to be the "Claude.md" to be most effective. In other words, set up effective automated checks for linting, type checking, unit tests etc., and tell Claude to always run these before completing a task. If the agent keeps doing something you don't like, then a linting update or an additional test is often more effective than tinkering with the Claude.md file. Also, ensure docs on the codebase are up to date, tell Claude to read the relevant parts when working on a task, and of course update the docs for each new task. YMMV but this has worked for me.
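As a sketch, the "tell Claude to always run these" part might look something like the following in a CLAUDE.md; the check commands here are placeholders for whatever your stack actually uses, not a prescribed setup:

```markdown
## Before completing any task

Run these checks and fix all failures before reporting the task as done:

- `npm run lint`       (placeholder: your linter)
- `npm run typecheck`  (placeholder: your type checker)
- `npm test`           (placeholder: your unit tests)

Read the relevant docs before changing a module, and update them as part
of the task.
```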

btbuildem 10 minutes ago | parent | prev | next [-]

Overall this seems like a good set of guidelines. I appreciate that some of the observations are backed up by data.

What I find most interesting is how a hierarchical / recursive context construct begins to emerge. The author's note on the "root" claude.md, as well as the opening comments on LLMs being stateless, really resonate with me. I think soon we will start seeing stateful LLMs, via clever manipulation of scope and context. Something akin to memory, as we humans perceive it.

prettyblocks 10 minutes ago | parent | prev | next [-]

The advice here seems to assume a single .md file with instructions for the whole project, but the AGENTS.md methodology, as supported by agents like GitHub Copilot, is to break out more specific AGENTS.md files in the subdirectories of your code base. I wonder if and how the tips shared change in a flow with a bunch of focused AGENTS.md files throughout the code.
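For reference, the layout that methodology implies looks roughly like this; the directory names are hypothetical:

```
repo/
├── AGENTS.md          # project-wide conventions
├── api/
│   └── AGENTS.md      # rules specific to the API code
└── web/
    └── AGENTS.md      # rules specific to the frontend
```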

0xblacklight 6 minutes ago | parent [-]

Hi, post author here :)

I didn’t dive into that because in a lot of cases it’s not necessary and I wanted to keep the post short, but for large monorepos it’s a good idea.

jasonjmcghee 30 minutes ago | parent | prev | next [-]

Interesting selection of models for the "instruction count vs. accuracy" plot. Curious when that was done and why they chose those models. How well does ChatGPT 5/5.1 (and codex/mini/nano variants), Gemini 3, Claude Haiku/Sonnet/Opus 4.5, recent grok models, Kimi 2 Thinking etc (this generation of models) do?

alansaber 23 minutes ago | parent [-]

Guessing they included some smaller models just to show how they drop accuracy at smaller context sizes

jasonjmcghee 11 minutes ago | parent [-]

Sure - I was more commenting that they are all > 6 months old, which sounds silly, but things have been changing fast, and instruction following is definitely an area that has been developing a lot recently. I would be surprised if accuracy drops off that hard still.

eric-burel 31 minutes ago | parent | prev | next [-]

"You can investigate this yourself by putting a logging proxy between the claude code CLI and the Anthropic API using ANTHROPIC_BASE_URL" I'd be eager to read a tutorial about that; I never know which tool to favour for this when you're not a systems or networking expert.

0xblacklight 5 minutes ago | parent | next [-]

Hi, post author here

We used cloudflare’s AI gateway, which is pretty simple: set one up, get the proxy URL, and set it through the env var. Very plug-and-play.
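For anyone wanting to try it, the wiring is roughly this; the gateway URL below is a placeholder, not a real endpoint:

```shell
# Point Claude Code at your logging proxy / gateway.
# Substitute the URL your own gateway gives you.
export ANTHROPIC_BASE_URL="https://gateway.example.com/v1/anthropic"
# Then launch Claude Code as usual; API traffic now flows through the proxy:
# claude
```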

fishmicrowaver 15 minutes ago | parent | prev [-]

Have you considered just asking claude? I'd wager you'd get up and running in <10 minutes.

dhorthy a few seconds ago | parent [-]

agree - i've had claude one-shot this for me at least 10 times at this point cause i'm too lazy to lug whatever code around. literally made a new one this morning

vladsh 26 minutes ago | parent | prev [-]

What is a good Claude.md?

testdelacc1 22 minutes ago | parent [-]

Claude.md - A markdown file you add to your code repository to explain how things work to Claude.

A good Claude.md - I don’t know, presumably the article explains.