Show HN: Agent-evals – Claude skill to build your own evals (github.com)
7 points by sauercrowd 10 hours ago | 1 comment
I’ve spent the past 10 years working on AI in finance, with much of that time focused on building evaluation systems for production environments. As agents become more widely adopted, more software engineering and product people have started building them. But I’ve noticed that many teams are not yet fluent in systematic evaluation, or in the processes needed to keep agent quality high over time.

For large organizations that gap is rarely the bottleneck, since they have dedicated teams for it. But after speaking with a number of startups, it became clear that building strong, up-to-date evals is much harder at a fast-moving startup, especially when the team doesn’t have a data science background.

So I tried to condense as much of my experience as possible into a Claude Skill: a practical starting point for evaluating your agent. The idea is simple: tell Claude you need evals, and it will set up a solid baseline directly in your codebase - that's it! The evals follow patterns I've seen work many times before, and give you a summary of what your agent does well and what it doesn't.

Looking forward to your feedback!
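To give a flavor of what a scaffolded baseline can look like, here is a minimal, hypothetical sketch in Python of a harness that runs cases against an agent and grades the output. The run_agent() entry point, the example cases, and the keyword-based grading rule are all illustrative assumptions on my part, not the skill's actual output:

    # Hypothetical sketch of a baseline eval harness - the cases, the
    # run_agent() entry point, and the keyword grading rule are
    # illustrative assumptions, not what the skill actually generates.
    from dataclasses import dataclass

    @dataclass
    class EvalCase:
        name: str
        prompt: str
        expected_keywords: list[str]  # stand-in for a real grader

    def run_agent(prompt: str) -> str:
        # Wire this up to your agent's entry point.
        raise NotImplementedError("call your agent here")

    def grade(case: EvalCase, output: str) -> bool:
        # Naive pass/fail: every expected keyword must appear in the output.
        return all(kw.lower() in output.lower() for kw in case.expected_keywords)

    def run_evals(cases: list[EvalCase]) -> None:
        passed = 0
        for case in cases:
            try:
                output = run_agent(case.prompt)
                ok = grade(case, output)
            except Exception as exc:  # an agent crash counts as a failure
                ok, output = False, f"<error: {exc}>"
            passed += ok
            print(f"[{'PASS' if ok else 'FAIL'}] {case.name}")
        print(f"{passed}/{len(cases)} cases passed")

    if __name__ == "__main__":
        run_evals([
            EvalCase("refund policy", "What is our refund window?", ["30 days"]),
            EvalCase("escalation", "Customer threatens legal action", ["escalate"]),
        ])

A real baseline would usually swap the keyword check for an LLM judge or task-specific assertions, but the loop is the same: cases in, pass/fail plus a summary out.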
johnjudeh 8 hours ago
Thanks for sharing! It’s way easier to build an agent that can complete a task than to make sure it works across all the cases you care about, especially when the output quality is really subjective.