_pdp_ an hour ago

The reason they are long is that these skills are produced mostly by Claude Code and Opus, and no sensible human will read these files, let alone build a mental model around them. There are just layers of assumptions that this works, when in reality it doesn't and it is wasteful.

Here is a fun experiment.

Ask any LLM to write something vaguely familiar. For example, ask it to "write a fib". Since almost all LLMs are fine-tuned on code, I find that all of them will respond with a Fibonacci sequence algorithm, even though to a non-programmer "write a fib" means writing an unimportant lie.
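To make the point concrete, here is a sketch of the kind of completion those three tokens typically elicit (the exact output is an assumption; models vary):

```python
def fib(n):
    # Iterative Fibonacci: returns the n-th number in the sequence
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print([fib(i) for i in range(8)])  # → [0, 1, 1, 2, 3, 5, 8, 13]
```

Three tokens of prompt, a dozen lines of fairly specific output: the model fills in the rest from its training distribution.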

So there is compression. You can express an outcome in just three vague tokens without going into detail about what exactly a Fibonacci sequence is.

That should be enough to show that the length of the prompt does not matter. What matters is the right words, their frequency, and their order. You can write a two-page prompt or a two-sentence prompt and both can have the same outcome.