danjc | 2 days ago
This is written by someone who has no idea how transformers actually work | ||||||||
ricksunny | 2 days ago
Contra: The piece’s first line cites OpenAI directly: https://openai.com/index/why-language-models-hallucinate/
neuroelectron | 2 days ago
Furthermore, if you simply try to push on certain safety topics, you can see that they actually can reduce hallucinations, or at least make certain topics a hard line. They simply don't, because agreeing with your pie-in-the-sky plans and giving you vague directions encourages users to engage with and use the chatbot. If people got discouraged by answers like "it would take at least a decade of expertise..." or other realistic answers, they wouldn't waste time fantasizing about plans.
j_crick | 2 days ago
> The way language models respond to queries – by predicting one word at a time in a sentence, based on probabilities

Kinda tells all you need to know about the author in this regard.
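(For reference, a minimal sketch of the autoregressive loop the quoted sentence is paraphrasing. The vocabulary and the fake_logits stand-in are made up purely for illustration; a real model scores subword tokens with a transformer forward pass over the whole context, not whole words.)

    import numpy as np

    VOCAB = ["the", "model", "predicts", "a", "token", "<eos>"]
    rng = np.random.default_rng(0)

    def fake_logits(context):
        # Stand-in for a transformer forward pass over the context;
        # a real LLM would score tens of thousands of subword tokens here.
        return rng.normal(size=len(VOCAB))

    def generate(prompt, max_tokens=10, temperature=1.0):
        context = list(prompt)
        for _ in range(max_tokens):
            logits = fake_logits(context) / temperature
            probs = np.exp(logits - logits.max())
            probs /= probs.sum()                     # softmax -> probability distribution
            next_token = rng.choice(VOCAB, p=probs)  # sample one token from it
            if next_token == "<eos>":
                break
            context.append(next_token)               # feed it back in and repeat
        return " ".join(context)

    print(generate(["the"]))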
progval | 2 days ago
I don't know what to make of it. The author looks prolific in the field of ML, with 8 published articles (and 3 preprints) in 2025, but only one on LLMs specifically. https://scholar.google.com/citations?hl=en&user=AB5z_AkAAAAJ...