Show HN: LLMs don't hallucinate because they're bad at math, it's the format (github.com)
2 points by yvonboulianne 11 hours ago | 2 comments
ksaj 4 hours ago:
404. At least someone got to read it 4 hours ago. Is this the same thing you meant to link to? https://github.com/yvonboulianne/laeka-brain
gisanokharu 9 hours ago:
interesting take. curious what you mean by format specifically - is this about tokenization, autoregressive next-token prediction, or something else? my experience is that hallucinations are worse on sparse facts that require precise recall vs things that can be derived. the model knows it doesn't know, but the training pushes it to complete the sequence either way. is that what you mean, or is this a different angle?
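(Not from the linked repo, which 404s - just a minimal sketch of the "model knows it doesn't know" point, assuming a Hugging Face causal LM. The model name and both prompts are arbitrary placeholders, not anything from the OP's project.)

    # Sketch: compare next-token uncertainty on a sparse-recall prompt vs. a
    # derivable one. Higher entropy suggests the model is less sure, yet
    # autoregressive decoding will emit a token either way.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "gpt2"  # placeholder; any causal LM works
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name).eval()

    def next_token_entropy(prompt: str) -> float:
        """Entropy (nats) of the next-token distribution after `prompt`."""
        ids = tok(prompt, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = model(ids).logits[0, -1]  # logits at the last position
        return float(torch.distributions.Categorical(logits=logits).entropy())

    # Sparse fact needing precise recall vs. something largely derivable:
    print("recall   :", next_token_entropy("The ISBN of Gödel, Escher, Bach is"))
    print("derivable:", next_token_entropy("Two plus two equals"))

This doesn't settle whether the OP's "format" claim is about tokenization, but it gives a concrete handle on the recall-vs-derivation distinction.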