hypron 2 days ago
My issue with this is that the LLM could just be roleplaying that it doesn't know.
jdiff 2 days ago
Of course it is. It's not capable of actually forgetting or suppressing its training data. Because of the prompt, it's double-checking rather than assuming. Roleplaying is exactly what it's doing, and at any point it may stop doing that and spit out an answer based solely on training data. It's a big part of why search overview summaries are so awful: many times the answers aren't grounded in the source material.
| ||||||||
stavros a day ago
Doesn't know what? This isn't about the model forgetting its training data; of course it can't do that, any more than I could say "press the red button. Actually, forget that, press whatever you want" and have you actually forget what I said. Instead, what can happen is that, like a human, the model (hopefully) disregards the instruction, making it carry (close to) zero weight.
brianwawok 2 days ago
To test this, you'd just need to edit the ROM and switch around the solution. Not sure how complicated that is; it likely depends on the ROM system.
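As a minimal sketch, the byte-level edit itself is simple once you've found where the solution bytes live. Everything here is hypothetical: the filename and offsets are made up, and the real locations would come from a hex editor or an emulator's memory viewer.

    # Swap two bytes of the puzzle solution in a ROM image.
    # OFFSET_A and OFFSET_B are hypothetical placeholders.
    OFFSET_A, OFFSET_B = 0x1A20, 0x1A21

    with open("game.rom", "rb") as f:
        rom = bytearray(f.read())

    rom[OFFSET_A], rom[OFFSET_B] = rom[OFFSET_B], rom[OFFSET_A]

    with open("game_patched.rom", "wb") as f:
        f.write(bytes(rom))

The catch is that some ROM formats also store an internal checksum that would need recalculating after an edit, or the game may misbehave; that's the part that depends on the ROM system.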
| ||||||||