Henchman21 2 hours ago
If they've let their AI write the policy, and then they repeat that as policy, how exactly is this an "LLM hallucination" and not a real policy?
root_axis 35 minutes ago | parent | next [-]
It's the same thing. Whether it was hallucinated upstream or in situ, the point is that it's not a real policy that the business adheres to, just something the LLM spat out.
teraflop 2 hours ago | parent | prev [-]
It's both, isn't it? If the AI writes the policy and is also responsible for enforcing it (by handling tickets and acting as a gatekeeper for which issues are escalated to humans who can do something about them), then the hallucination becomes real.