▲ | frotaur 7 days ago |
How do you know which questions should be answered with 'I don't know'? There are obvious cases, questions that genuinely have no answer, but if only those are in the dataset, the model will answer 'I don't know' only for unreasonable questions. To train this effectively you would need a dataset of questions which you know the model doesn't know. But if you have that... why not answer those questions and put them in the dataset, so that the model will know? That's a bit imprecise, but I think it captures why 'I don't know' answers are harder to train.
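To make the dilemma concrete, here is a rough sketch of how such a set would have to be built (ask_model, the reference answers, and the exact-match grading are all placeholders, not a real pipeline):

    # Hypothetical sketch: ask_model() and the reference answers are placeholders.
    def build_idk_dataset(qa_pairs, ask_model, n_samples=5):
        train = []
        for question, reference in qa_pairs:
            # Sample the model a few times to see whether it already knows this one.
            answers = [ask_model(question) for _ in range(n_samples)]
            hits = sum(a.strip().lower() == reference.strip().lower() for a in answers)
            if hits == 0:
                # The model consistently fails, so this is an "I don't know" candidate...
                # ...but we have the reference in hand, so we could just as well teach it.
                train.append({"q": question, "a": "I don't know."})
            else:
                train.append({"q": question, "a": reference})
        return train

The first branch is the whole problem: at the point where you can label a question "unknown to the model", you also have everything you need to just teach it the answer instead.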
▲ | philipswood 6 days ago | parent | next [-] |
I think one could add artificial fake knowledge, specifically to teach the network how to recognize "not knowing".
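Roughly like this (a toy sketch: the invented entities, the 50/50 split, and the "inject into the corpus" step are all assumptions, just to illustrate the idea):

    import random

    # Toy sketch of the "fake knowledge" idea. Entity names and attributes are invented.
    def make_fake_facts(n):
        facts = []
        for i in range(n):
            name = f"Zorvian-{i}"              # entity that cannot exist in the corpus
            year = random.randint(1200, 1900)  # invented attribute
            facts.append((f"When was {name} founded?", str(year)))
        return facts

    def build_training_pairs(facts, known_fraction=0.5):
        random.shuffle(facts)
        cut = int(len(facts) * known_fraction)
        pairs = []
        # This half of the fake facts is also injected into the training corpus,
        # so the model genuinely "knows" them and should answer.
        for q, a in facts[:cut]:
            pairs.append({"q": q, "a": a})
        # This half is never shown anywhere else, so the only consistent target
        # is an admission of ignorance: the behaviour we want the model to generalize.
        for q, _ in facts[cut:]:
            pairs.append({"q": q, "a": "I don't know."})
        return pairs

The hope would be that the model generalizes from the held-out fakes to real facts it was never trained on, rather than just memorizing which invented entities it hasn't seen.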
▲ | simianwords 7 days ago | parent | prev [-] |
But you just described how to turn the "I don't know" cases into "I know, and the answer is <>" cases, not why "I don't know" is inherently hard to train for.