mark-r 4 days ago

We can't. I don't think the LLMs themselves can recognize when an answer is stale. They could if contradicting data were available, but their very existence suppresses that contradictory data.

zahlman 4 days ago

LLMs don't experience the world, so they have no a priori reason to know what is or isn't truthful in the training data.

(Not to mention the confabulation. Making up API method names is natural when your model of the world treats the method names you've seen as examples, not as an exhaustive listing.)
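
A toy sketch of that failure mode in Python (the `prepend` call is hypothetical, and it's exactly the kind of plausible-looking name a model might invent):

    # Real list methods, seen over and over in training data:
    nums = [1, 2]
    nums.append(3)    # exists
    nums.extend([4])  # exists

    # A confabulated sibling that fits the naming pattern but does not
    # exist on Python's built-in list type:
    nums.prepend(0)   # AttributeError: 'list' object has no attribute 'prepend'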