GolfPopper 2 hours ago
I have yet to see a comparison of human vs. LLM confabulation errors at scale. "Many small errors" makes a presumption about LLM confabulation/hallucination that seems unwarranted. Pre-LLM humans (and our computers) have managed vast nuclear arsenals, bioweapons research, and ubiquitous global transport - as a few examples - without any catastrophic mistakes, so far. What can we reasonably expect as a likely worst-case scenario if LLMs replace all the relevant expertise and execution?