squigz 2 days ago |
> LLMs' ability to separate facts from expression is quite well developed, maybe their strongest skill.

There should presumably be data showing the reliability of LLMs' knowledge to be quite high, then?
ndriscoll 2 days ago | parent |
I don't see how that follows. A model can learn a false "fact" without retaining the way that statement was expressed. It can also just make up facts entirely, which by definition then did not come from any training data.