CC-Canary: Detect early signs of regressions in Claude Code (github.com)
27 points by tejpalv 4 hours ago | 8 comments

evantahler 2 hours ago
I feel like asking the thing that you are measuring, and don’t trust, to measure itself might not produce the best measurements. | ||||||||||||||

Retr0id an hour ago
What is "drift"? It seems to be one of those words that LLMs love to say but it doesn't really mean anything ("gap" is another one). | ||||||||||||||

aleksiy123 2 hours ago
Interesting approach. I've been particularly interested in tracking whether adding skills or tweaking prompts makes things better or worse. Does anyone know of other similar tools that let you track across harnesses while coding? Running evals as a solo dev is too cost-restrictive, I think.
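
(For illustration: a minimal sketch of this kind of tracking, assuming the agent can be driven headlessly from the shell. The command, the tasks, and the pass checks are placeholder assumptions, not CC-Canary's actual harness.)

    # Minimal canary-style tracker (a sketch, not CC-Canary itself).
    # AGENT_CMD is a hypothetical headless invocation; swap in your own harness.
    import datetime
    import json
    import pathlib
    import subprocess

    AGENT_CMD = ["claude", "-p"]          # assumption: agent callable as a shell command
    LOG = pathlib.Path("canary_runs.jsonl")

    # Small fixed tasks with cheap, deterministic pass checks.
    CANARIES = [
        ("reverse", "Reply with only the result of reversing the string 'abc'.", "cba"),
        ("arith", "Reply with only the value of 17 * 23.", "391"),
    ]

    def run_canary(prompt: str, expect: str) -> bool:
        """Run one canary prompt and apply a crude substring check to the reply."""
        out = subprocess.run(AGENT_CMD + [prompt], capture_output=True, text=True, timeout=120)
        return expect in out.stdout

    results = {name: run_canary(prompt, expect) for name, prompt, expect in CANARIES}
    with LOG.open("a") as f:
        record = {"ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
                  "results": results}
        f.write(json.dumps(record) + "\n")
    print(results)

Run on a schedule (cron, CI), this builds up a per-day log of pass/fail outcomes that prompt or skill tweaks can be compared against.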

wongarsu 2 hours ago
See also https://marginlab.ai/trackers/claude-code-historical-perform... for a more conventional approach to tracking regressions. This project is somewhat unconventional in its approach, but that might reveal issues that are masked in typical benchmark datasets.
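
(A sketch of what that conventional approach might reduce to, under assumptions: read a log of canary outcomes like the one above and flag a regression when the recent pass rate drops significantly below the historical baseline, here via a one-sided two-proportion z-test. The file name, window size, and threshold are illustrative, not taken from either project.)

    # Compare a recent window of canary outcomes against the historical baseline.
    import json
    import math
    import pathlib

    records = [json.loads(line) for line in pathlib.Path("canary_runs.jsonl").open()]
    # Flatten each run into individual pass/fail trials.
    trials = [bool(v) for rec in records for v in rec["results"].values()]

    WINDOW = 50                      # assumption: how many recent trials count as "now"
    base, recent = trials[:-WINDOW], trials[-WINDOW:]

    p1, n1 = sum(base) / len(base), len(base)
    p2, n2 = sum(recent) / len(recent), len(recent)
    pooled = (sum(base) + sum(recent)) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se if se else 0.0

    print(f"baseline={p1:.2f} recent={p2:.2f} z={z:.2f}")
    if z > 1.645:                    # one-sided 95%: recent pass rate is significantly lower
        print("possible regression")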