zehaeva 3 days ago
There are limits to such algorithms, as proven by Kurt Gödel. https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_...
bigmadshoe 2 days ago
You're really missing the point with LLMs and truth if you're appealing to Gödel's Incompleteness Theorem.
naasking 2 days ago
True, and in the case of Solomonoff induction, incompleteness manifests in the uncomputability of the Kolmogorov complexity used to order programs. But what incompleteness actually shows is that there is no single algorithm for truth; a collection of algorithms can still compensate for each other's weaknesses in many ways, e.g. while no single algorithm can solve the halting problem, different algorithms can cover cases for which the others fail to produce a definitive halting result. I'm not convinced you can't build a pretty robust system that produces a pretty darn good approximation of truth, in the limit.

Incompleteness also rears its head in type inference for programming languages, but the cases where it fails are typically not programs of any interest, or not programs a human would understand anyway. I think the relevance of incompleteness elsewhere is often overblown in exactly this way.
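To make the "collection of algorithms" point concrete, here's a toy Python sketch (my own illustration, not anything from Solomonoff induction or a real analysis tool): each partial halting checker only answers for the narrow syntactic class of programs it understands, returns "unknown" otherwise, and a combiner falls back across them. The checker names and the string-matching heuristics are purely illustrative assumptions.

    from typing import Callable, Optional

    # A partial decider returns True (halts), False (loops forever),
    # or None (this checker can't tell).
    PartialDecider = Callable[[str], Optional[bool]]

    def no_loops(src: str) -> Optional[bool]:
        # Straight-line code with no loop constructs trivially halts.
        if "while" not in src and "for" not in src:
            return True
        return None

    def obvious_infinite_loop(src: str) -> Optional[bool]:
        # Crude syntactic pattern: `while True:` with no break never halts.
        if "while True:" in src and "break" not in src:
            return False
        return None

    def bounded_for_range(src: str) -> Optional[bool]:
        # A for-loop over range(...) with no while-loops halts (range is finite).
        if "for" in src and "range(" in src and "while" not in src:
            return True
        return None

    def portfolio(deciders: list[PartialDecider], src: str) -> Optional[bool]:
        # Return the first definitive verdict any checker produces.
        for decider in deciders:
            verdict = decider(src)
            if verdict is not None:
                return verdict
        return None  # every checker is stumped; the gap Turing/Gödel guarantee

    if __name__ == "__main__":
        checkers = [no_loops, obvious_infinite_loop, bounded_for_range]
        print(portfolio(checkers, "x = 1 + 2"))                      # True
        print(portfolio(checkers, "while True:\n    pass"))          # False
        print(portfolio(checkers, "for i in range(10):\n    x = i")) # True
        print(portfolio(checkers, "while x > 0:\n    x -= step"))    # None

The point is only structural: every individual checker has blind spots, yet the set of programs the portfolio answers definitively grows as you add checkers, even though no finite portfolio ever closes the gap entirely.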