If this were an analogy to AI, would inference discover information that wasn't present during training? Is that where hallucinations come from?