▲ Frannky 5 hours ago
I think unless you're doing simple tasks, skills are unreliable. For better reliability, I have the agent trigger APIs that handle the complex logic (and their own LLM calls) internally. Has anyone found a solid strategy for making complex 'skills' more dependable?
▲ selridge 4 hours ago | parent | next [-]
In my experience, all text “instruction” to the agent should be taken on a prayer. If you write compact agent guidance that is not contradictory and is local and useful to your project, the agent will follow it most of the time. There is nothing that you can write that will force the agent to follow it all of the time. If one can accept failure to follow instructions, then the world is open. That condition does not really comport with how we think about machines. Nevertheless, it is the case. Right now, a productive split is to place things that you need to happen into tooling and harnessing, and place things that would be nice for the agent to conceptualize into skills.
▲ plufz 5 hours ago | parent | prev | next [-]
My only strategy is what used to be called slash commands but are now also skills, i.e. I call them explicitly. I think that actually works quite well, and in the frontmatter properties you can allow specific tools and tell it to use specific hooks for security or validation.
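For anyone unfamiliar with the format being described: these are markdown files with YAML frontmatter that the agent loads when you invoke them by name. A minimal sketch of what that looks like (the exact frontmatter keys and tool-allowlist syntax vary by CLI version, so treat the field names here as assumptions to check against the current docs; the command name and scripts are made up for illustration):

```
---
description: Run the project's lint and test loop, summarize failures
allowed-tools: Bash(npm run lint:*), Bash(npm test:*)
---
Run lint and the test suite. Report only the failing files and a
one-line cause per failure. Do not attempt to fix anything.
```

Because the user invokes it explicitly (e.g. typing the command name) and the tool allowlist constrains what the agent can call, this sidesteps the two unreliable steps: the model deciding *whether* to trigger the skill, and the model having free rein over tools while executing it.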
▲ chickensong 5 hours ago | parent | prev [-]
Is it that the skills aren't being triggered reliably, or that they get triggered but the skill itself is complex and doesn't work as expected?