simianwords a day ago

No, they don’t give false information often.

ziml77 a day ago

They do. To the point where I'm getting absolutely furious at work at the number of times shit's gotten fucked up, and when I ask how it went wrong, the response starts with "ChatGPT said"

ipaddr a day ago

Do you double-check every fact, or are you relying on being an expert on the topics you ask an LLM about? If you are an expert on a topic, you probably aren't asking an LLM anyhow.

It reminds me of someone who reads a newspaper article about a topic they know, notices it's mostly incorrect, but then reads the rest of the paper and accepts those articles as fact.

tempest_ a day ago

Gell-Mann Amnesia

tempest_ a day ago

I have them make up stuff constantly for smaller Rust libraries that are newish or don't get a lot of use.

mythrwy a day ago

"Often" is relative but they do give false information. Perhaps of greater concern is their confirmation bias.

That being said, I do agree with your general point. These tools are useful for exploring topics and answers, we just need to stay realistic about the current accuracy and bias (eager to agree).

mythrwy a day ago

I just asked ChatGPT:

"do llms give wrong information often?"

"Yes. Large language models produce incorrect information at a non-trivial rate, and the rate is highly task-dependent."

But wait, it could be lying, and they actually don't give false information often! But if that were the case, that wrong answer would itself verify that they give false information at a non-trivial rate, because I don't ask it that much stuff.