Right, if you want guarantees about anything, you don't want statistical machine learning models.
In practice, I've found that the risk of LLMs hallucinating against well-chosen context is low enough that I rarely worry about it.