▲ ajross | 7 hours ago

> [...] are fiction machines. All they can do is hallucinate, and sometimes the hallucinations are useful. That alone rules them out, categorically, from any critical control loop.

True, but no more true than it is if you replace the antecedent with "people". Saying that the tools make mistakes is correct. Saying that (like people) they can never be trained and deployed such that the mistakes are tolerable is an awfully tall order. History is paved with people who got steamrollered by technology they didn't think would ever work.

On a practical level AI seems very median in that sense. It's notable only because it's... kinda creepy, I guess.
▲ solid_fuel | 7 hours ago

> True, but no more true than it is if you replace the antecedent with "people".

Incorrect. People are capable of learning by observation, introspection, and reasoning. LLMs can only be trained by rote example. Hallucinations are, in fact, an unavoidable property of the technology - something which is not true for people. [0]
▲ fao_ | 7 hours ago

> Saying that (like people) they can never be trained and deployed such that the mistakes are tolerable is an awfully tall order.

It is true, though. We have numerous studies on why hallucinations are central to the architecture, and numerous case studies from companies that have tried putting them in control loops! We have about 4 years of examples of bad things happening because the trigger was given to an LLM.