Uehreka 3 days ago:
| If you fuzz the names they won’t mean the same thing anymore, and then it’s no longer the same test. If you remove the whitespace the LLM will just run a formatter on the code. It’s not like the LLM just loads in all the code and then starts appending its changes. |
CuriouslyC 3 days ago:
I've never had an LLM try to run a formatter on my code, with probably a few thousand hours logged driving agents (4+ at once for most of that). Fuzzing makes the semantics slightly less immediately obvious, but LLMs are more robust to this than you or I; the biggest difference is the reduction in memorization carryover. If it feels like too different a test for you, not sure what to tell you, but I know the world would appreciate a better way to test for training set contamination if you can figure one out.
|
|
flare_blitz 3 days ago:
| And your basis for saying this is...? |
CuriouslyC 3 days ago:
I've done it? I have a benchmark called scramblebench that does this rewriting, to evaluate model performance degradation under symbol replacement and layers of indirection.
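(Not scramblebench itself, whose internals aren't shown here — just a minimal sketch of the symbol-replacement idea for Python code, using the standard `ast` module. The class name `SymbolScrambler` and the `sym_N` naming scheme are made up for illustration; builtins are left alone so behavior is preserved.)

```python
import ast
import builtins

_BUILTINS = set(dir(builtins))  # don't rename print, len, etc.

class SymbolScrambler(ast.NodeTransformer):
    """Consistently rename user-defined identifiers so the program's
    semantics are unchanged but memorized surface forms are broken."""

    def __init__(self):
        self.mapping = {}

    def _rename(self, name):
        if name in _BUILTINS:
            return name
        if name not in self.mapping:
            self.mapping[name] = f"sym_{len(self.mapping)}"
        return self.mapping[name]

    def visit_Name(self, node):
        node.id = self._rename(node.id)
        return node

    def visit_arg(self, node):
        node.arg = self._rename(node.arg)
        return node

    def visit_FunctionDef(self, node):
        node.name = self._rename(node.name)
        self.generic_visit(node)  # recurse into args and body
        return node

def scramble(source: str) -> str:
    """Return semantically equivalent source with identifiers fuzzed."""
    tree = SymbolScrambler().visit(ast.parse(source))
    return ast.unparse(tree)
```

Running a benchmark item through `scramble` before prompting the model gives a task with identical semantics but none of the names the model may have memorized, so any score drop versus the original is a rough contamination signal.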