jmmcd 5 hours ago
"Pelican on bicycle" is one special case, but the problem (and the interesting point) is that LLMs are always generalising. If a lab focused specifically on pelicans on bicycles, it would, as a by-product, improve performance on, say, tigers on rollercoasters. This is new and counter-intuitive to most ML/AI people.
BoorishBears 2 hours ago | parent
The gold standard for cheating on a benchmark is SFT while ignoring memorization. That's why the standard way to quickly test for benchmark contamination has always been to swap out the specifics of the task, e.g. replacing named concepts with nonsense words in reasoning benchmarks.
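A minimal sketch of that perturbation, i.e. swapping named concepts for nonsense words while keeping the reasoning structure intact (the helper names and the toy question are mine, not from any particular benchmark):

```python
import random
import re

def nonsense_word(rng):
    """Generate a short pronounceable nonsense token, e.g. 'vablo'."""
    consonants, vowels = "bdfgklmnprstvz", "aeiou"
    return "".join(rng.choice(consonants) + rng.choice(vowels) for _ in range(2))

def swap_concepts(question, concepts, seed=0):
    """Replace each named concept (and its plural) with a fresh nonsense
    word, preserving the logical form of the task."""
    rng = random.Random(seed)
    mapping = {c: nonsense_word(rng) for c in concepts}
    for original, replacement in mapping.items():
        # (s?) keeps simple plurals; \b avoids partial-word matches
        question = re.sub(rf"\b{re.escape(original)}(s?)\b",
                          replacement + r"\1", question)
    return question, mapping

q = "All pelicans can ride bicycles. Percy is a pelican. Can Percy ride a bicycle?"
perturbed, mapping = swap_concepts(q, ["pelican", "bicycle"], seed=42)
```

If the model's score drops sharply on the perturbed version while the logical structure is unchanged, that is evidence the original score leaned on memorized surface forms rather than reasoning.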