DrewADesign 6 days ago
This type of discrepancy seems to be all freaking over the place in the LLMscape. Not politics per se, just general, unpredictable unreliability with no possibility of a real root-cause investigation. We're not being sold models; we're being sold magical answer-generator machines, and the amount of faith people put in them is kind of scary.

Say someone didn't know what happened in Tiananmen Square. How would they even know to push back, or whether it was all hallucinated? Kids are using this for school. Beyond that, what's the utility of an information-retrieval service that's only safe for topics you already know well enough to identify plausible-sounding bullshit? Do we really want to just hope our own Dunning-Kruger weak points are enough less severe than some LLM's hallucinations that we can pick up on them? At least dropping into a forum from a search engine often leads to people countering whatever bullshit you find. I dunno.