throwawaymaths 5 days ago
Fine. Which part is the problem?
mannykannot 4 days ago
I assume you are aware that, for many uses of LLMs, the propensity to hallucinate is a problem (especially when the people hoping to use them don't properly account for it), which leaves me puzzled about what you are asking here.
johnnyanmac 5 days ago
The part where it can't recognize situations where there isn't enough data/training and admit it doesn't know. I'm a bit surprised no one talks about this factor. It's like talking to a giant narcissist who can Google really fast but doesn't understand what it reads. The ability to admit ignorance is a major factor in credibility, because none of us know everything all at once.
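For what it's worth, getting a model to say "I don't know" is an active research area (often called abstention or selective prediction). Here's a minimal sketch of one common heuristic: threshold the model's own per-token log-probabilities before trusting an answer. The helper names, the example logprob values, and the 0.6 threshold are all illustrative assumptions, not anything from this thread or any particular API:

    import math

    def answer_confidence(token_logprobs):
        # Geometric-mean probability across the answer's tokens:
        # a crude proxy for how sure the model was overall.
        avg_logprob = sum(token_logprobs) / len(token_logprobs)
        return math.exp(avg_logprob)

    def answer_or_abstain(answer, token_logprobs, threshold=0.6):
        # Return the answer only when confidence clears the threshold;
        # otherwise admit ignorance instead of guessing.
        if answer_confidence(token_logprobs) < threshold:
            return "I don't know."
        return answer

    # A confident answer passes; a shaky one is replaced.
    print(answer_or_abstain("Paris", [-0.01, -0.02]))      # Paris
    print(answer_or_abstain("Quito", [-1.2, -2.5, -0.9]))  # I don't know.

The catch, which is arguably the point above, is that token-level confidence is poorly calibrated: models are often just as fluent when hallucinating, so a threshold like this catches far less than you'd hope.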