zozbot234 8 hours ago

The point of so-called 'skills' is to be short how-to reminders that the agent can pull into its context and then act upon. If the knowledge is already in the model, it will most likely surface in the reasoning phase anyway, so there's little benefit to writing it up as a skill, unless it's extremely relevant yet hard to surface and you want the model to skip that part of the reasoning.
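
For concreteness, a skill is usually just a short markdown file the agent loads on demand. A minimal sketch, loosely following the published SKILL.md format with YAML frontmatter (the name, description, steps, and helper script here are all invented for illustration):

    ---
    name: rotate-api-keys
    description: Rotate this repo's API keys without downtime
    ---
    1. Generate a new key with scripts/genkey.sh (hypothetical helper).
    2. Add it alongside the old key in config/keys.yaml.
    3. Deploy, verify requests succeed, then remove the old key.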

awwaiid 5 hours ago | parent | next [-]

I've been building a skill to help run manual tests on an app. I go through interactively, steering the agent toward a useful validation of a particular PR and navigating the specifics of the app: what I care about and what I don't. At the end, I have it build a skill that would have skipped the backtracking, retries, and steering I did.

Then I do it again from scratch; this time it takes less steering. I have it update the skill further.

I've been doing this across a few different tests, building a skill that takes less and less steering to do app-specific and team-specific manual testing, faster each time. The first few runs took longer than manually testing the feature would have. While I've only started doing this recently, it now takes less time than I would, and it posts screenshots of the results and the testing steps on the PR for dev review. Ongoing exploration!
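
For a sense of what the distilled skill ends up looking like, here's a hypothetical sketch (the app, routes, coupon, and steps are all invented; the real one is team-specific):

    ---
    name: validate-checkout-flow
    description: Manually validate checkout changes in a PR build
    ---
    1. Start the app (npm run dev) and open http://localhost:3000.
    2. Log in with the seeded test account from .env.test.
    3. Add two items to the cart, apply coupon TEST10, complete checkout.
    4. Screenshot each step; post screenshots and steps on the PR.
    Skip: admin pages and the payment sandbox (already covered by CI).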

7thpower 4 hours ago | parent [-]

I love the screenshots; I need to do something like that.

deadbabe 8 hours ago | parent | prev [-]

There is a benefit to a skill, though. If an AI keeps encoding common tasks as skills and scripts, the LLM eventually becomes just a dumb routing mechanism for ambiguous user requests, which ultimately drives down token usage.

If everything you want an LLM to do is already captured as code or simple skills, you can switch to dumber models that know just enough to select the appropriate skill for a given user input, and not much else. You would only tap into the more expensive heavy-duty LLMs when trying to do something that hasn't been done before.

Naturally, AI companies with a vested interest in making sure you use as many tokens as possible will do everything they can to steer you away from this type of architecture. It's effectively a cache for LLM reasoning.
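
As a minimal sketch of that architecture in Python (small_model, big_model, and the skill scripts are all invented stand-ins, not a real API):

    import subprocess

    # Map of known tasks to skill scripts (hypothetical paths).
    SKILLS = {
        "rotate-api-keys": "skills/rotate_keys.sh",
        "validate-checkout-flow": "skills/checkout_test.sh",
    }

    def small_model(prompt: str) -> str:
        # Stand-in for a cheap classifier-grade model call.
        raise NotImplementedError

    def big_model(prompt: str) -> str:
        # Stand-in for an expensive frontier-model call.
        raise NotImplementedError

    def handle(request: str) -> str:
        choice = small_model(
            f"Pick one of {sorted(SKILLS)} for this request, or NONE:\n{request}"
        )
        if choice in SKILLS:
            # Cache hit: the reasoning was already encoded as a script.
            out = subprocess.run([SKILLS[choice]], capture_output=True, text=True)
            return out.stdout
        # Cache miss: pay for the heavy model only on novel work.
        return big_model(request)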

zozbot234 8 hours ago | parent [-]

AI companies don't want you to waste tokens; they benefit when you use them efficiently, because inference infrastructure is their main bottleneck and efficiency lets them serve more users on it. It's the Jevons paradox in action.

gruez 6 hours ago | parent | next [-]

>AI companies don't want you to waste tokens; they benefit when you use them efficiently, because inference infrastructure is their main bottleneck and efficiency lets them serve more users on it.

No, the actual incentive is that people will eventually benchmark models on a bang-per-buck basis, and models that chew through tokens won't be competitive. It's the same reason the "Intel/AMD are intentionally sandbagging their CPUs so they can sell more CPUs" theory doesn't work.
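
"Bang per buck" here is just cost per solved task. A toy comparison with made-up numbers shows why a token-hungry model loses even at a lower per-token price:

    # Made-up numbers: the cheaper-per-token model burns 3x the tokens.
    models = {
        "frugal":  {"usd_per_mtok": 15.0, "tokens_per_task": 20_000, "solve_rate": 0.8},
        "verbose": {"usd_per_mtok": 10.0, "tokens_per_task": 60_000, "solve_rate": 0.8},
    }
    for name, m in models.items():
        cost = m["usd_per_mtok"] * m["tokens_per_task"] / 1e6 / m["solve_rate"]
        print(f"{name}: ${cost:.3f} per solved task")
    # frugal: $0.375 per solved task
    # verbose: $0.750 per solved task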

pixl97 6 hours ago | parent [-]

Well, that only works when one competitor is far enough ahead that they can play games like that.

At least currently there is no moat in AI, so we wouldn't expect that to be occurring.

mhmmmmmm 6 hours ago | parent | prev [-]

I don't think that's necessarily true. They aren't really capacity-constrained in practice (they might be behind the scenes, adjusting training on the fly, but that's speculation), so wasted tokens effectively help utilize their (potentially idle) inference GPUs.