We can mitigate a lot of the problems with humans being non-deterministic by establishing trust and consequences.
There are no real consequences for a bad output from an LLM, and I don't know about you, but I don't trust them.