illuminator83 9 hours ago
| Are you sure?
I've been confidently wrong about stuff before. Embarrassing, but it happens.
And I've worked with many people who are sometimes wrong about stuff too.
With LLMs we call that "hallucinating"; with people we just call it a "lapse in memory", an "error in judgment", "being distracted", or plain "a mistake".
fainpul 8 hours ago
True, but people can use qualifiers like "I think …" or "Wasn't there this thing …", which let you judge how certain they are about the answer. LLMs are always super confident and tell you how it is. Period. You would soon stop asking a coworker who repeatedly behaved like that.
illuminator83 7 hours ago
Yeah, for the most part. But I've even had a few instances in which someone was very sure about something and still wrong. Usually not about APIs, but about stuff that is more work to verify or not quite as timeless: cache optimization issues, or even the suitability of certain algorithms for certain problems.
The world changes a lot, and sometimes people don't notice and stick with what was state of the art a decade ago. But I think the point of the article is that you should have measures in place that make hallucinations not matter, because they will be noticed in CI and tests.
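To make that concrete, here is a minimal sketch (the function and test are invented for illustration, not taken from the article): an ordinary unit test is enough to catch a hallucinated call such as json.parse(), which does not exist in Python's standard library, because the code fails the first time CI runs it.

    # Minimal sketch: a plain unit test catches a hallucinated API call.
    # If generated code had used `json.parse(raw)` (a function that does not
    # exist in Python's standard library), this test would fail in CI with
    # an AttributeError the first time it ran.
    import json
    import unittest

    def read_config(raw: str) -> dict:
        # Correct call; a hallucinated `json.parse(raw)` here would blow up.
        return json.loads(raw)

    class ReadConfigTest(unittest.TestCase):
        def test_round_trip(self):
            self.assertEqual(read_config('{"retries": 3}'), {"retries": 3})

    if __name__ == "__main__":
        unittest.main()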
whobre 8 hours ago
It’s different. People don’t just invent random APIs that don’t exist. LLMs do that all the time.
illuminator83 7 hours ago
For the most part, yes, because people usually read the docs and test things themselves. But I remember a few people long ago confidently telling me how to do this or that in e.g. git, only to find out during testing that it didn't quite work like that.
Or telling me how some subsystem could be tested, when it didn't work like that at all.
They operated from memory instead of checking, or confused one tool or system with another. LLMs can and should verify their assumptions too; that is what the blog article is about. It should keep most hallucinations, and the mistakes people make, from doing any real harm. If you let an LLM do that, it won't be much of a problem either.
I usually link an LLM to an online source for the API I want to use, or tell it to just look it up, so it is less likely to make such mistakes. It helps.
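As a rough illustration of that habit (the URL and function name below are made-up placeholders, not anything from the article), the prompt simply carries a pointer to the authoritative docs so the model checks them instead of recalling the API from memory:

    # Hypothetical sketch: include a link to the real documentation in the
    # prompt so the model is nudged to verify the API instead of guessing.
    API_DOCS_URL = "https://docs.example.com/payments/api"  # placeholder

    def build_prompt(task: str) -> str:
        return (
            f"Use the client API documented at {API_DOCS_URL}. "
            "Look up the exact function names and parameters there; "
            "do not invent any. Then: " + task
        )

    print(build_prompt("add a retry wrapper around create_charge()."))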
whobre 7 hours ago
Again, with people it is a rare occurrence. LLMs do it regularly. I just can’t believe anything they say.