I don't see how that follows. A model can learn a false "fact" without retaining the exact wording in which that statement was expressed. It can also fabricate facts entirely, which by definition did not come from any training data.