fao_ 4 hours ago
> Saying that (like people) they can never be trained and deployed such that the mistakes are tolerable is an awfully tall order.

It is, though. We have numerous studies on why hallucinations are central to the architecture, and numerous case studies from companies that have tried putting LLMs in control loops! We have about 4 years of examples of bad things happening because the trigger was given to an LLM.
ajross 4 hours ago
> We have numerous studies on why hallucinations are central to the architecture

And we have tens of thousands of years of shared experience of "People Were Wrong and Fucked Shit Up". What's your point? Again, my point isn't that LLMs are infallible; it's that they only need to be better than their competition, and their competition sucks.