lblume | 6 days ago
Empirically, there seems to be strong evidence that LLMs give factual output for readily accessible knowledge questions. Many benchmarks test exactly this.
shadowgovt | 6 days ago
Yes, but in the same sense that, empirically, I can swim in the nearby river most days. Because the city has a combined storm drain / sewer system that overflows into the river, some days the water I'd swim in is full of shit, and nothing about the infrastructure guards against that happening. I can tell you how quickly "swimmer beware" becomes "just stay out of the river" when a potential E. coli infection is on the table. Depending on how much the factuality of the information matters, I fully understand people being similarly skeptical of a machine that probably isn't outputting shit but has nothing in its design to actively discourage or prevent it.