dbspin 4 hours ago
I'd consider hallucinations a fundamental flaw, one that sets hard limits on the current utility of LLMs in any context.
SoftTalker 4 hours ago
I thought this for a while, but I've also been thinking about all the stupid, false stuff that actual humans believe. I'm not sure AI won't reach a point where, even if it's not perfect, it's no worse than people are at selectively observing policies, holding wrong beliefs, or just making something up when they don't know.