mannykannot 5 days ago

There is a way to state Parson's point which avoids this issue: hallucinations are just as much a consequence of the LLM working as designed as are correct statements.

throwawaymaths 5 days ago | parent [-]

fine. which part is the problem?

mannykannot 4 days ago | parent | next [-]

I suppose you are aware that, for many uses of LLMs, the propensity to hallucinate is a problem (especially when the people hoping to use these LLMs do not properly take it into account), which leaves me puzzled about what you are asking here.

johnnyanmac 5 days ago | parent | prev [-]

The part where it can't recognize situations where there isn't enough data or training behind an answer and admit it doesn't know.

I'm a bit surprised no one talks about this factor. It's like talking to a giant narcissist who can Google really fast but can't understand what it reads. The ability to admit ignorance is a major factor in credibility, because none of us know everything all at once.
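[To make the point above concrete: a minimal sketch of the decoding step being discussed, with a toy vocabulary and made-up logit values, not any particular model's code. The decoder turns logits into a probability distribution and always emits some token, whether or not the model has reliable knowledge behind it; there is no built-in abstention channel unless training teaches the model to produce "I don't know" tokens.]

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_next_token(logits: np.ndarray, temperature: float = 1.0) -> int:
        """Softmax + temperature sampling over a toy vocabulary."""
        scaled = logits / temperature
        scaled -= scaled.max()                       # numerical stability
        probs = np.exp(scaled) / np.exp(scaled).sum()
        return int(rng.choice(len(probs), p=probs))

    # Hypothetical logits for a question the model has little data on:
    # the distribution is nearly flat, but a token is still emitted with
    # the same surface confidence -- the sampling loop never abstains.
    logits = np.array([0.2, 0.1, 0.15, 0.05])
    print(sample_next_token(logits))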

throwawaymaths 5 days ago | parent [-]

yeah sorry i mean which part of the architecture. "working as designed"