Evals in 2025: going beyond simple benchmarks to build models people can use (github.com)
80 points by jxmorris12 7 days ago | 8 comments
andy99 4 days ago | parent | next [-]
These can be useful for labs training models. I don't see them as particularly valuable for building AI systems. Real performance depends on how the system is built, much more so than on the underlying LLM. Evaluating the system you build on relevant inputs is what matters most. Beyond that, it would be nice to see benchmarks that give guidance on how an LLM should be used as a system component, not just which one is "better" at something.
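A minimal sketch of what I mean, assuming a hypothetical run_system() that wraps whatever pipeline you actually built (retrieval, prompting, tool calls) and a few hand-labelled cases drawn from your own traffic:

    # Hypothetical harness: score the whole system, not the bare model,
    # on inputs that look like real usage. Names and cases are made up.
    cases = [
        {"input": "Refund request, order arrived damaged", "expected_action": "issue_refund"},
        {"input": "Where is my package?", "expected_action": "lookup_tracking"},
    ]

    def run_system(text: str) -> str:
        """Stand-in for the real pipeline: retrieval, prompts, tools, etc."""
        raise NotImplementedError

    def evaluate(cases) -> float:
        hits = sum(run_system(c["input"]) == c["expected_action"] for c in cases)
        return hits / len(cases)

Swap the exact-match check for whatever scoring your task needs (rubric, judge model, regression against logged outputs); the point is that the unit under test is the system, not the model in isolation.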
6Az4Mj4D 3 days ago | parent | prev | next [-]
I see there are lots of courses on evals being sold on Maven, some costing as much as USD 3,500. Are they worth it? https://maven.com/parlance-labs/evals
aplassard 4 days ago | parent | prev | next [-]
I think cost should also be a direct consideration. Model performance on benchmarks varies wildly depending on the budget you give it. https://substack.com/@andrewplassard/note/p-173487568?r=2fqo...
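A hedged sketch of what folding cost in could look like; the model names, accuracies, token counts, and per-million-token prices are made-up placeholders, not measurements:

    # Compare models on accuracy per dollar instead of accuracy alone.
    # All numbers are illustrative; substitute your own eval results and
    # the prices you actually pay.
    results = {
        "model_a": {"accuracy": 0.82, "in_tok": 1_200_000, "out_tok": 300_000,
                    "usd_per_m_in": 3.00, "usd_per_m_out": 15.00},
        "model_b": {"accuracy": 0.78, "in_tok": 1_200_000, "out_tok": 250_000,
                    "usd_per_m_in": 0.25, "usd_per_m_out": 1.25},
    }

    for name, r in results.items():
        cost = r["in_tok"] / 1e6 * r["usd_per_m_in"] + r["out_tok"] / 1e6 * r["usd_per_m_out"]
        print(f"{name}: accuracy={r['accuracy']:.2f}  cost=${cost:.2f}  "
              f"accuracy per dollar={r['accuracy'] / cost:.3f}")

Fixing a token or dollar budget per question and re-running the benchmark under that constraint is another way to make the comparison honest.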
dustrider 4 days ago | parent | prev | next [-]
Move beyond benchmarks… then proceed to list a bunch of benchmarks. The problem for me is that it's not worth running these myself. Sure, I may pay attention to which model is better at tool calling, but what matters is how well it does on my use case.
gdiamos 4 days ago | parent | prev [-]
How can the community tell if models overfit to these benchmarks? | ||||||||
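One check that seems reasonable (a sketch, not a full answer): score a model on the published items and on freshly written perturbations of the same items; a large gap suggests memorization rather than capability. model_answer() and the item lists below are hypothetical placeholders.

    # Hypothetical contamination probe: compare accuracy on published
    # benchmark items vs. freshly written twins (reworded, renamed, new
    # numbers). A big gap points at overfitting/memorization.
    def model_answer(question: str) -> str:
        raise NotImplementedError  # call whatever model you're auditing

    def accuracy(items) -> float:
        return sum(model_answer(q) == a for q, a in items) / len(items)

    original_items = [("What is 17 * 24?", "408")]    # as published
    perturbed_items = [("What is 19 * 23?", "437")]   # freshly written twin

    gap = accuracy(original_items) - accuracy(perturbed_items)
    print(f"original-vs-perturbed accuracy gap: {gap:+.2%}")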