Arifcodes 3 hours ago
The distinction matters: skills that encode your team's domain-specific conventions are useful. Skills the model generates from scratch based on a vague prompt are not. I've been building AI agent systems for clients, and the pattern that works is iterative: the agent tries something, you steer it, then you capture what worked as a reusable skill. Not "generate skills before solving" but "distill lessons after solving." The paper tests the former, which nobody experienced actually does.

The real value of skills is reducing token burn on repeat tasks. Once you've figured out the right approach, you encode it so next time the model doesn't have to re-derive everything from first principles. It's memoization for reasoning.
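A minimal sketch of what "memoization for reasoning" could look like in practice. Everything here (the `SkillStore` class, `capture`, `build_prompt`) is hypothetical illustration, not an API from the comment or any real framework:

```python
# Hypothetical sketch: cache a distilled "skill" per task type after a
# task is solved once ("distill lessons after solving"), then prepend it
# to later prompts so the model skips the first-principles derivation.

class SkillStore:
    def __init__(self):
        self._skills = {}  # task_type -> distilled instructions

    def capture(self, task_type, distilled_instructions):
        # Record what actually worked, after the fact.
        self._skills[task_type] = distilled_instructions

    def build_prompt(self, task_type, task_description):
        # On a cache hit, reuse the known-good approach; on a miss, the
        # model has to reason from scratch (and burn the tokens).
        skill = self._skills.get(task_type)
        if skill:
            return (
                "Follow this known-good approach:\n"
                f"{skill}\n\nTask: {task_description}"
            )
        return f"Task: {task_description}"


store = SkillStore()
# First encounter: no skill yet, full prompt goes out cold.
first = store.build_prompt("db-migration", "add a nullable column")
# After steering the agent to a working approach, distill and capture it.
store.capture("db-migration",
              "1. Write forward and backward migrations. 2. Gate with a flag.")
# Repeat task: the cached skill is injected instead of re-derived.
second = store.build_prompt("db-migration", "add an index")
```

The memoization analogy holds in the lookup: the expensive step (the model re-deriving the approach) is replaced by a cheap dictionary hit keyed on task type.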
bavarianbob 3 hours ago | parent
Do you have a working example of a skill reducing tokens on repeat tasks? In my own use, the cost of writing and maintaining skills has been much larger than the tokens I save by having them.