neuralkoi | 4 hours ago
I'm not familiar with Skills, but looking at the repo, the amount of decorative code/text seems like overkill for what amounts to just the following prompt in a bash script (yikes), executed after a commit is run:
alexhans | 4 hours ago
Skills are just a good standard for describing repeatable workflows: they save context through progressive disclosure, make prompts shareable, and (a very underused feature) bound the non-deterministic parts with determinism (which could be scripts). Conceptually, you should treat them as incremental software instead of magic you grab from others [1].

The killer feature is that coding harnesses tend to ship SkillBuilder agent skills, so creating skills becomes very easy and you can evolve them over time. I recommend you build your own for your particular pain points. Here's a very simple example [2] showing what another user mentioned around "evals", so that you can actually achieve good-enough correctness for your automation.

[1] https://alexhans.github.io/posts/series/evals/building-agent...

[2] https://alexhans.github.io/posts/series/evals/sketch-to-text...
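For readers who haven't seen one: a skill is typically just a directory containing a SKILL.md file (YAML frontmatter plus markdown instructions), optionally bundling deterministic scripts that the model is told to run instead of improvising. A minimal sketch of the shape, with hypothetical names and paths (not taken from the repo under discussion):

```markdown
---
name: commit-message-check
description: Lint a commit message against the team convention. Use after drafting a commit.
---

# Commit message check

1. Read the drafted commit message.
2. Run the bundled deterministic check rather than judging style yourself:
   `scripts/check_msg.sh "$MSG"`
3. If the script fails, rewrite the message and re-run the script until it passes.
```

The frontmatter description is what gets loaded into context up front; the full body (and the script) are only pulled in when the skill is actually used, which is where the context savings come from.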
saidnooneever | 3 hours ago
Most stuff in these tools is just another md file which gets spliced into the prompt somehow. It's how LLMs work; this is normal. It's also why I'd recommend people use Claude to build a similar tool for themselves: you'll spend some tokens on it, and then afterwards save something like 90% of token costs using your own tool. It's really crazy how many fewer tokens and calls are needed to do meaningful work. You can also secure/lock down tool calls better, make the agent's tasks retryable, give it failure modes, etc. (If your laptop dies during agent work, only god and the agent know what happened to your code... oh no wait, the agent just needs to spend 100k tokens to remember where it was. Great way to spend your money.)
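The "md file spliced into the prompt" point is the whole mechanism, and the token savings come from splicing lazily. A toy sketch of that progressive-disclosure splicing (the skill names, descriptions, and in-memory registry here are made up for illustration, not any real harness's API):

```python
# Registry of skills: name -> (one-line description, full instruction body).
# In a real tool these would be parsed from SKILL.md files on disk.
SKILLS = {
    "commit-check": (
        "Lint commit messages against the team convention.",
        "# Commit message check\n"
        "1. Read the drafted commit message.\n"
        "2. Run scripts/check_msg.sh and fix any reported violations.",
    ),
}

def base_prompt() -> str:
    """Every request pays only for the one-line descriptions."""
    lines = [f"- {name}: {desc}" for name, (desc, _body) in SKILLS.items()]
    return "Available skills:\n" + "\n".join(lines)

def expand(name: str) -> str:
    """The full instruction body is spliced in only when a skill is invoked."""
    return SKILLS[name][1]
```

The base prompt stays a few lines per skill no matter how long the instruction bodies grow; the model only ever sees a full body for the skill it actually picks.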