fao_ 4 hours ago

> Saying that (like people) they can never be trained and deployed such that the mistakes are tolerable is an awfully tall order.

It is, though. We have numerous studies on why hallucinations are central to the architecture, and numerous case studies from companies that have tried putting LLMs in control loops. We also have about four years of examples of bad things happening because the trigger was handed to an LLM.

ajross 4 hours ago

> We have numerous studies on why hallucinations are central to the architecture,

And we have tens of thousands of years of shared experience of "People Were Wrong and Fucked Shit Up". What's your point?

Again, my point isn't that LLMs are infallible; it's that they only need to be better than their competition, and their competition sucks.

TheOtherHobbes 4 hours ago

It's a fine line. Humans don't always fuck shit up.

But human systems that don't fuck shit up are short-lived, rare, and fragile, and they've only become a possibility - not a reality - in the last century or so.

The rest of history is mostly just endless horrors, with occasional tentative moments of useful insight.