▲ | vrotaru 7 days ago |
To some degree *all* LLM answers are made-up facts. For material that is abundant in the training data, those answers are almost always correct. For topics that are not common knowledge (allow for a lot of variability there) you should always check. I've started to think of LLMs as a form of lossy compression of available knowledge which, when prompted, produces "facts".
▲ | devmor 7 days ago | parent | next [-]
> I've started to think of LLMs as a form of lossy compression of available knowledge which, when prompted, produces "facts".

That is almost exactly what they are and how you should treat them: a lossily compressed corpus of publicly available information with some randomness mixed in. The most fervent skeptics like to call LLMs "autocorrect on steroids", and they are not really wrong.
| ||||||||||||||||||||||||||||||||||||||
▲ | vbezhenar 7 days ago | parent | prev [-]
Old sci-fi AI used to be an entity that had a hard-facts database and could search it instantly. I think that's the right direction for modern AI to move in. ChatGPT already runs Google searches often. So replace Google with a curated knowledge database, train the LLM to consult that database for every fact, and hallucinations will be gone.
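For what it's worth, that is roughly what retrieval-augmented generation (RAG) already does: look facts up first, then let the model answer only from what was retrieved. A minimal sketch of the idea, assuming a toy fact store; FACT_STORE, retrieve() and call_llm() are made-up placeholders here, not any real API:

    # Sketch of "consult a curated database for every fact" (RAG-style).
    # Everything below is a toy placeholder, not a real library or model API.

    FACT_STORE = {
        "boiling point of water": "Water boils at 100 degrees C at 1 atm.",
        "speed of light": "The speed of light in vacuum is 299,792,458 m/s.",
    }

    def retrieve(question: str) -> list[str]:
        # Naive keyword match; a real system would use embeddings or full-text search.
        q = question.lower()
        return [fact for key, fact in FACT_STORE.items()
                if all(word in q for word in key.split())]

    def call_llm(prompt: str) -> str:
        # Placeholder: swap in whatever model or API you actually use.
        return "(model output would go here)"

    def answer(question: str) -> str:
        facts = retrieve(question)
        prompt = ("Answer using ONLY the facts below. If they are insufficient, say so.\n"
                  + "\n".join("- " + f for f in facts)
                  + "\n\nQuestion: " + question)
        return call_llm(prompt)

    print(answer("What is the boiling point of water?"))

Whether that kills hallucinations is another question, since the model can still misread or ignore the retrieved facts, but it does constrain the answer to a curated source instead of whatever the weights half-remember.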