redman25 | 11 hours ago
They can if they've been post-trained on what they know and don't know. The LLM can first be given questions to test its knowledge, and if the model returns a wrong answer, it can be given a new training example with an "I don't know" response.
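A minimal sketch of that idea, assuming you already have question/gold-answer pairs and some way to query the model; the names build_idk_dataset and answer_fn, and the exact-match grading, are illustrative assumptions, not any lab's actual pipeline:

    from typing import Callable, Iterable

    IDK = "I don't know"

    def build_idk_dataset(
        qa_pairs: Iterable[tuple[str, str]],
        answer_fn: Callable[[str], str],
    ) -> list[tuple[str, str]]:
        """Probe the model on known Q/A pairs and collect
        'I don't know' training targets for the ones it misses."""
        examples = []
        for question, gold in qa_pairs:
            predicted = answer_fn(question)
            # Exact match is a stand-in; a real pipeline would use a
            # fuzzier grader or an LLM judge to score the answer.
            if predicted.strip().lower() != gold.strip().lower():
                examples.append((question, IDK))
        return examples

    # Toy usage with a stubbed "model" that only knows one fact.
    if __name__ == "__main__":
        def toy_model(q: str) -> str:
            return "Paris" if "France" in q else "Berlin"

        qa = [
            ("What is the capital of France?", "Paris"),
            ("What is the capital of Spain?", "Madrid"),
        ]
        print(build_idk_dataset(qa, toy_model))
        # -> [('What is the capital of Spain?', "I don't know")]

The resulting pairs would then be mixed into a fine-tuning set, which is exactly where the coverage objection below comes in.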
dingnuts | 11 hours ago
Oh, that's a great idea, just do that for every question the LLM doesn't know the answer to! That's... how many questions? Maybe if one model generates all possible questions, then...