zmmmmm 6 hours ago
I was surprised how long some of these skills are. They are pages and pages long, with tables and checkbox lists and code examples, etc. Curious how normal that is - it would only take a couple of these to really fill the context a lot.
_pdp_ a minute ago | parent
The reason they are long is that these skills are produced mostly by Claude Code and Opus, and no sensible human will read these files, let alone build a mental model around them. There are just layers of assumptions that this works - when in reality it doesn't, and it is wasteful.

Here is a fun experiment. Ask any LLM to write something vaguely familiar - for example, "write a fib". Since almost all LLMs are fine-tuned on code, I find that all of them will respond with a Fibonacci sequence algorithm, even though to a non-programmer "write a fib" means to write an unimportant lie. So there is compression: you can express an outcome in just 3 vague tokens without going into detail about what exactly a Fibonacci sequence is.

That should be enough to understand that the length of the prompt does not matter. What matters is the right words, their frequency, and their order. You can write a two-page prompt or a two-sentence prompt and both can have the same outcome.
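To make the compression point concrete: those 3 tokens reliably expand into something like the following (a minimal sketch of the typical response; the function name and iterative style are my assumptions, and models will vary):

```python
def fib(n):
    """Return the first n numbers of the Fibonacci sequence."""
    seq = []
    a, b = 0, 1
    for _ in range(n):
        seq.append(a)
        a, b = b, a + b
    return seq

print(fib(8))  # → [0, 1, 1, 2, 3, 5, 8, 13]
```

None of the detail here - iteration, the seed values, the return type - was in the prompt; it all came from what the model already associates with "fib".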
gwerbin 5 hours ago | parent
I quickly skimmed, and it looks like at least a few of them are intended to be more like system prompts for a tightly scoped sub-agent than a skill as such. I agree, I wouldn't want to use a lot of these in a longer-running work session.

I have been successful with short and focused skills so far. I treat them as reusable snippets of context, but small ones - for example, a couple of paragraphs at most about how to use Python in my project and how to run unit tests. I also have several short "info" skills that don't actually give the agent instructions; they merely contain useful contextual information that the agent can choose to pull in if needed.

Even having too many skills can be an issue, because the list of skill names and their descriptions all end up in the context at some point.
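As a sketch, a short, focused skill in the style described above might look roughly like this (the name, description, and commands are illustrative assumptions, not taken from any real project):

```markdown
---
name: python-testing
description: How to run and write unit tests in this project. Use when adding or debugging tests.
---

Run the whole suite with `uv run pytest tests/ -x`, or a single file with
`uv run pytest tests/test_foo.py`. Tests live next to the module they cover
and use plain `assert` style, not unittest classes.
```

A couple of paragraphs like this is enough for the agent to act on, without pages of tables and checklists.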
tecoholic 6 hours ago | parent
I have written zero skills, so I'm not sure how normal it is. I counted the words in a couple of them and they seem to be in the 2k range, so 5 skills would be around 10k words - somewhat more in tokens. Even at a small LLM context of 128k, that's still only around 10%. And for a 1M context window like the big models have, it barely registers.
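The back-of-the-envelope math above checks out; a quick sketch (the ~1.3 tokens-per-word ratio is a rough assumption for English prose, not a measured value):

```python
TOKENS_PER_WORD = 1.3  # rough assumption for English prose

def context_share(num_skills, words_each, context_tokens):
    """Fraction of the context window consumed by skill text."""
    tokens = num_skills * words_each * TOKENS_PER_WORD
    return tokens / context_tokens

# 5 skills at ~2k words each:
print(f"{context_share(5, 2000, 128_000):.1%}")    # ~10% of a 128k window
print(f"{context_share(5, 2000, 1_000_000):.1%}")  # ~1% of a 1M window
```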
umeshunni 3 hours ago | parent
> it would only take a couple of these to really fill the context a lot.

Only the skill front-matter (name, description, triggers, etc.) is loaded into context by default, so this isn't likely to happen without thousands of skills.
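That progressive-disclosure mechanic is easy to sketch: at startup only the front-matter block of each skill file is read, and the body stays out of context until the skill is invoked. A minimal illustration (the frontmatter delimiters and field names are assumptions about the SKILL.md format, and the parsing is deliberately naive):

```python
import io

def read_frontmatter(f):
    """Read only the frontmatter block of a SKILL.md, skipping the body."""
    meta = {}
    if f.readline().strip() != "---":
        return meta                 # no frontmatter block
    for line in f:
        line = line.strip()
        if line == "---":           # end of frontmatter; body is never read
            break
        if ":" in line:
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta

skill = io.StringIO(
    "---\n"
    "name: python-testing\n"
    "description: How to run unit tests in this project.\n"
    "---\n"
    "...pages of instructions the model only sees when the skill fires...\n"
)
print(read_frontmatter(skill))
# → {'name': 'python-testing', 'description': 'How to run unit tests in this project.'}
```

Only the few dozen tokens of name and description per skill sit in context permanently, which is why it takes a very large number of skills before the listing itself becomes a problem.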
sergiotapia 5 hours ago | parent
I reviewed the line counts of my own project skill files, and the top 3 I have are:
Maybe I am _too_ conservative here. Lots to explore.