Would this actually return memories and context? How could you know if parts or all of it were hallucinated?
You can't know that for certain about any output from these models.