ragibson 2 days ago

Yes, at least to some extent. The author mentions that the base model knows the answer to the switch puzzle but does not execute it properly here.

"It is worth noting that the instruction to "ignore internal knowledge" played a role here. In cases like the shutters puzzle, the model did seem to suppress its training data. I verified this by chatting with the model separately on AI Studio; when asked directly multiple times, it gave the correct solution significantly more often than not. This suggests that the system prompt can indeed mask pre-trained knowledge to facilitate genuine discovery."

hypron 2 days ago | parent

My issue with this is that the LLM could just be roleplaying that it doesn't know.

jdiff 2 days ago | parent | next

Of course it is. It's not capable of actually forgetting or suppressing its training data. Because of the prompt, it's just double-checking rather than assuming. Roleplaying is exactly what it's doing, and at any point it may stop doing that and spit out an answer based solely on its training data.

It's a big part of why search overview summaries are so awful. Many times the answers are not grounded in the material.

wavemode a day ago | parent

It may actually have the opposite effect: the instruction not to use prior knowledge may be what caused Gemini 3 to assume incorrect details about how certain puzzles worked and get itself stuck for hours. It knew the right answer (from some game walkthrough in its training data) but intentionally went in a different direction to pretend that it didn't. So, paradoxically, the test results end up worse than if the model truly didn't know.

stavros a day ago | parent | prev | next

Doesn't know what? This isn't about the model forgetting its training data; of course it can't do that, any more than I can say "press the red button. Actually, forget that, press whatever you want" and have you actually forget what I said.

Instead, what can happen is that, like a human, the model (hopefully) disregards the original instruction, giving it (close to) zero weight.

brianwawok 2 days ago | parent | prev

To test this, you'd just need to edit the ROM and switch around the solution. Not sure how complicated that is; it likely depends on the system.
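(For a Game Boy game, a minimal sketch of such a patch might look like the following. The puzzle table offset and byte layout are invented placeholders you'd need a disassembly to pin down; the checksum fix-ups use the standard cartridge-header fields.)

```python
# Sketch: swap a puzzle's solution bytes in a Game Boy ROM, then fix the
# cartridge checksums so the patched ROM still passes validation.
# PUZZLE_OFFSET and PUZZLE_LENGTH are hypothetical; real values would
# come from a disassembly of the specific game.
from pathlib import Path

PUZZLE_OFFSET = 0x4A000  # hypothetical address of the switch-order table
PUZZLE_LENGTH = 4        # hypothetical: four one-byte switch IDs

def patch_rom(src: str, dst: str, new_order: bytes) -> None:
    rom = bytearray(Path(src).read_bytes())
    assert len(new_order) == PUZZLE_LENGTH
    rom[PUZZLE_OFFSET:PUZZLE_OFFSET + PUZZLE_LENGTH] = new_order

    # Header checksum (stored at 0x014D) covers bytes 0x0134-0x014C.
    chk = 0
    for b in rom[0x0134:0x014D]:
        chk = (chk - b - 1) & 0xFF
    rom[0x014D] = chk

    # Global checksum (0x014E-0x014F, big-endian) is the 16-bit sum of
    # every byte except the two checksum bytes themselves.
    total = (sum(rom) - rom[0x014E] - rom[0x014F]) & 0xFFFF
    rom[0x014E], rom[0x014F] = total >> 8, total & 0xFF

    Path(dst).write_bytes(rom)

patch_rom("game.gb", "game_shuffled.gb", bytes([3, 1, 0, 2]))
```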

Workaccount2 2 days ago | parent

I don't know why people still get wrapped around the axle about "training data".

Basically every benchmark worth its salt uses bespoke problems purposely tuned to force models to reason and generalize. That's the whole point of the ARC-AGI tests.

Unsurprisingly, Gemini 3 Pro performs way better on ARC-AGI than 2.5 Pro, and unsurprisingly it did much better at Pokemon.

The benchmarks indicate, by design, that you could mix up the switch puzzle pattern and the model would still solve it.
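For reference, ARC tasks ship as JSON: a few input/output grid pairs to infer the rule from, plus held-out test inputs, so the exact transformation is novel by construction. A toy task in that shape (the grids and rule here are invented for illustration, not from the real dataset):

```python
# Toy task in the ARC-AGI format: integers 0-9 are colors. The solver
# must infer the rule (here, mirror each row) from the "train" pairs alone.
toy_task = {
    "train": [
        {"input": [[0, 1], [1, 0]], "output": [[1, 0], [0, 1]]},
        {"input": [[2, 0], [0, 2]], "output": [[0, 2], [2, 0]]},
    ],
    "test": [
        {"input": [[3, 0], [0, 3]]},  # expected output: [[0, 3], [3, 0]]
    ],
}
```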