adastra22 4 days ago

No? That is trivially not the case. Ask an LLM something outside its training data and it will hallucinate the answer. How could it do anything else? Sometimes the hallucination happens to be correct, but not always.