rybosworld a day ago
Tuning the model output to perform better on certain prompts is not the same as improving the model. It's valid to worry that the model makers are gaming the benchmarks. If you think that's happening and you want to personally figure out which models are really the best, keeping some prompts to yourself is a great way to do that. | ||||||||||||||||||||
namaria 13 hours ago
There is no guarantee that, by keeping your questions to yourself, no one else has published something similar. This is bad reasoning all the way through. The problem is in trying to use a single question as a benchmark.

The only way to really compare models is to create a set of tasks of increasing compositional complexity and run the models you want to compare through them. And you'd have to come up with a new body of tasks each time a new model is published, because providers will always game benchmarks that are a fixed target.

If LLMs were developing general reasoning, that gaming would be unnecessary. The fact that providers do it is evidence that there is no general reasoning, just second-order overfitting: loss on token prediction does descend, but that doesn't prevent the 'reasoning loss' from being uncontrollable (cf. 'hallucinations').
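A minimal sketch of the kind of rotating harness described above, assuming a hypothetical query_model(model, prompt) wrapper around whatever API each provider exposes; chained arithmetic stands in for whatever compositional tasks you actually care about. Tasks are regenerated on every run, so there is no fixed prompt set for a provider to tune against.

    import random
    from typing import Callable

    def make_chained_arithmetic(depth: int, rng: random.Random) -> tuple[str, int]:
        """Build one task that requires composing `depth` arithmetic steps."""
        value = rng.randint(2, 9)
        expr = str(value)
        for _ in range(depth):
            op, operand = rng.choice(["+", "*"]), rng.randint(2, 9)
            expr = f"({expr} {op} {operand})"
            value = value + operand if op == "+" else value * operand
        prompt = f"Evaluate {expr}. Reply with only the final integer."
        return prompt, value

    def run_benchmark(models: list[str],
                      query_model: Callable[[str, str], str],  # hypothetical API wrapper
                      max_depth: int = 6,
                      trials_per_depth: int = 20) -> dict[str, dict[int, float]]:
        """Score each model at each compositional depth; returns per-depth accuracy."""
        rng = random.Random()  # unseeded: a fresh task set every run
        tasks = {d: [make_chained_arithmetic(d, rng) for _ in range(trials_per_depth)]
                 for d in range(1, max_depth + 1)}
        results: dict[str, dict[int, float]] = {}
        for model in models:
            results[model] = {}
            for depth, batch in tasks.items():
                correct = 0
                for prompt, answer in batch:
                    reply = query_model(model, prompt)
                    # crude exact-match scoring: expected integer appears as a token
                    correct += str(answer) in reply.split()
                results[model][depth] = correct / len(batch)
        return results

Plotting accuracy against depth, rather than reporting a single aggregate score, is what makes the comparison informative: models tuned to familiar benchmarks tend to fall off sharply as compositional depth grows.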
ls612 a day ago
Who’s going out of their way to optimize for random HNers informal benchmarks? | ||||||||||||||||||||