koliber 2 days ago:
Hmmm. Didn't think about that. In people there is a difference between unconscious hallucination and intentional creativity. However, there might be situations where the two aren't distinguishable. In LLMs, it's hard to talk about intentionality at all. I love where you took this.
gishh 2 days ago (replying to parent):
A hallucination isn’t a creative new idea; it’s provably wrong information. If an LLM had actual intellectual ability, it could tell “us” how to improve models. They can’t. They’re literally defined by their token count, and they use statistics to generate token chains. They’re only as creative as the most statistically relevant token chains they were trained on, written by _people_ who actually used intelligence to type words on a keyboard.
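To make "statistics to generate token chains" concrete, here is a minimal sketch of next-token sampling. The vocabulary, the probability table, and the `next_token_probs` helper are hypothetical stand-ins for illustration, not any real model's API; a real LLM produces the distribution with a neural network conditioned on the full context.

```python
import random

# Toy illustration of statistical next-token generation (hypothetical
# probabilities): given the tokens so far, the "model" returns a
# probability distribution over the next token, and generation is just
# repeated sampling from that distribution.

def next_token_probs(context):
    # Hypothetical lookup table standing in for a trained model.
    table = {
        ("the",): {"cat": 0.5, "dog": 0.3, "idea": 0.2},
        ("the", "cat"): {"sat": 0.6, "ran": 0.4},
        ("the", "dog"): {"barked": 0.7, "slept": 0.3},
    }
    return table.get(tuple(context[-2:]), {"<eos>": 1.0})

def generate(prompt, max_tokens=5):
    tokens = list(prompt)
    for _ in range(max_tokens):
        probs = next_token_probs(tokens)
        choices, weights = zip(*probs.items())
        token = random.choices(choices, weights=weights)[0]
        if token == "<eos>":
            break
        tokens.append(token)
    return tokens

print(generate(["the"]))  # e.g. ['the', 'cat', 'sat']
```

The point of the sketch: every output token is drawn from a distribution learned from human-written text, which is the sense in which the output can only be as "creative" as the statistics of its training data.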