| ▲ | root_axis 3 hours ago |
| More likely it's just an LLM hallucination, not a real policy that Anthropic has. Unfortunately for them, it's a bad look to showcase one of the main failure modes of their product in their own business process.
|
| ▲ | Henchman21 2 hours ago | parent [-] |
| If they've let their AI write the policy, and then repeat it as policy, how exactly is this an "LLM hallucination" and not a real policy?

| ▲ | root_axis 33 minutes ago | parent | next [-]
| It's the same thing. Whether it was hallucinated upstream or in situ, the point is that it's not a real policy that the business adheres to, just something the LLM spat out.

| ▲ | teraflop 2 hours ago | parent | prev [-]
| It's both, isn't it? If the AI writes the policy and is also responsible for enforcing it (by handling tickets and acting as a gatekeeper for which issues are escalated to humans who can do something about them), then the hallucination becomes real.
|