gishh 2 days ago
A hallucination isn't a creative new idea; it's blatantly, provably wrong information. If an LLM had actual intellectual ability, it could tell us how to improve models. They can't. They're literally defined by token counts: they use statistics to generate token chains. They're only as creative as the most statistically relevant token chains they've been trained on, which were written by _people_ who actually used intelligence to type words on a keyboard.
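To make the "statistics to generate token chains" point concrete, here's a toy sketch (a made-up bigram model over a made-up twelve-word corpus, nowhere near a real LLM's scale or architecture): it can only ever emit continuations it has already seen, weighted by how often they appeared.

    import random
    from collections import defaultdict

    # Toy "statistics to generate token chains": a bigram model that can
    # only emit continuations it has already seen, weighted by frequency.
    # The corpus and names here are made up purely for illustration.
    corpus = "the cat sat on the mat and the cat ate the fish".split()

    # Count how often each token follows each other token.
    follow_counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(corpus, corpus[1:]):
        follow_counts[prev][nxt] += 1

    def next_token(prev):
        # Sample the next token in proportion to how often it followed prev.
        options = follow_counts[prev]
        return random.choices(list(options), weights=list(options.values()), k=1)[0]

    def generate(start, length=8):
        chain = [start]
        for _ in range(length):
            if chain[-1] not in follow_counts:
                break  # no observed continuation; the "model" is stuck
            chain.append(next_token(chain[-1]))
        return " ".join(chain)

    print(generate("the"))  # e.g. "the cat sat on the mat and the cat"

Nothing a chain like that produces is more than a reshuffle of what people already typed; it just gets harder to see that at scale.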