However, undefined behaviours are specified and knowable in advance: the language standard enumerates them, so they can be known about ahead of time (even if, in practice, only some people are familiar with the list). With LLMs, there's no way to know ahead of time that a particular prompt will lead to hallucinations.
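As a minimal illustration (assuming C here, though the same point holds for C++), signed integer overflow is one of the behaviours the standard explicitly calls undefined, so a reader can flag the line below as a problem just by inspecting the code, before running anything:

```c
#include <limits.h>
#include <stdio.h>

int main(void) {
    int x = INT_MAX;
    /* Signed integer overflow is undefined behaviour in C. The standard
     * says so explicitly, so we can know *in advance* that this line is
     * a problem, regardless of what any particular compiler does with it. */
    int y = x + 1;
    printf("%d\n", y);
    return 0;
}
```

There is no analogous catalogue for prompts: nothing you can read in advance tells you which inputs will make a model hallucinate.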