skybrian 2 days ago

> Users accustomed to receiving confident answers to virtually any question would likely abandon such systems rapidly.

Or maybe they would learn from feedback to use the system for some kinds of questions but not others? It depends on how easy the pattern is to learn; this is a matter of user education.

Saying "I don't know" is sort of like an error message. Clear error messages make systems easier to use. If the system can give accurate advice about its own expertise, that's even better.

pton_xd 2 days ago | parent

> Saying "I don't know" is sort of like an error message. Clear error messages make systems easier to use.

"I don't know" is not a good error message. "Here's what I know: ..." and "here's why I'm not confident about the answer ..." would be a helpful error message.

Then the question is: when it says "here's what I know, and here's why I'm not confident", is it telling the truth, or is that just another layer of hallucination? If it's the latter, you're back to square one.

skybrian 2 days ago | parent

Yeah, AI chatbots are notoriously bad at understanding their own limitations. I wonder how that could be fixed?
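
One direction that's been explored is to stop trusting the model's self-report entirely and instead sample it several times, treating agreement among the answers as a confidence proxy (often called self-consistency). A minimal sketch in Python, assuming a hypothetical query_model callable (prompt in, answer string out) that samples at nonzero temperature:

    from collections import Counter

    def answer_with_confidence(query_model, prompt, n_samples=10):
        # Self-consistency: sample the model several times and use
        # agreement among the answers as a rough confidence signal.
        answers = [query_model(prompt) for _ in range(n_samples)]
        top_answer, count = Counter(answers).most_common(1)[0]
        confidence = count / n_samples
        if confidence < 0.5:  # the 0.5 cutoff is an arbitrary choice here
            return f"Not confident: samples disagreed ({confidence:.0%} agreement)."
        return f"{top_answer} (agreement across samples: {confidence:.0%})"

Caveat: this measures consistency, not correctness. A model that is confidently wrong will still agree with itself, so it only catches the cases where the uncertainty shows up as instability.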

Terretta 18 hours ago | parent

Easier to think about if LLM means Large Limitations Mirror.