naasking 3 days ago
> I'm more reminded of Tom Scott's talk at the Royal Institution "There is no Algorithm for Truth"[0]. Isn't there? https://en.wikipedia.org/wiki/Solomonoff%27s_theory_of_induc...
zehaeva 3 days ago
There are limits to such algorithms, as proven by Kurt Gödel. https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_...
| ||||||||||||||||||||||||||||||||
LegionMammal978 2 days ago
That Wikipedia article is annoyingly scant on what assumptions are needed for the philosophical conclusions of Solomonoff's method to hold. (For that matter, it's also scant on the actual mathematical statements.) As far as I can tell, it's something like "if there exists some algorithm that always generates true predictions (or perhaps some sequence of algorithms whose predictions are within some epsilon of error?), then you can learn that algorithm in the limit, by enumerating all algorithms in order of length and filtering for those that predict your current set of observations." But as mentioned, it's uncomputable, and the relative lack of success of AIXI-based approaches suggests that it's not even as well-approximable as advertised. Also, if there is no single finite algorithm for Truth, Solomonoff's method will never get you all the way there.
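For concreteness, here's a toy, computable sketch of that enumerate-and-filter idea (not Solomonoff induction itself, which sums over all programs on a universal machine and is uncomputable). The "programs" here are tiny Python expressions over an index n, reduced mod 2 -- a made-up stand-in for a real program enumeration -- and consistent candidates are weighted by 2^-(description length):

    # Toy sketch: enumerate candidate "programs" by length, keep those
    # consistent with the observed bits, weight survivors by 2**-length,
    # and predict the next bit by weighted vote. The expression language
    # below is an assumption for illustration, not part of the theory.
    from itertools import product

    ALPHABET = "01n%+*() "  # assumed toy "language" of Python expressions

    def run(prog, n):
        try:
            # evaluate the expression at index n, reduced to a bit
            return eval(prog, {"__builtins__": {}}, {"n": n}) % 2
        except Exception:
            return None  # not a valid "program"

    def predict_next(bits, max_len=5):
        weights = {0: 0.0, 1: 0.0}
        for length in range(1, max_len + 1):
            for chars in product(ALPHABET, repeat=length):
                prog = "".join(chars)
                if all(run(prog, i) == b for i, b in enumerate(bits)):
                    guess = run(prog, len(bits))
                    if guess in weights:
                        weights[guess] += 2.0 ** -length
        return max(weights, key=weights.get)

    print(predict_next([0, 1, 0, 1, 0]))  # shortest consistent program is "n", so prints 1

Swap the toy expression language for a universal machine and let max_len go to infinity and you get the actual (uncomputable) construction; the 2**-length weights are the Solomonoff prior.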
yubblegum 2 days ago
> "computability and completeness are mutually exclusive: any complete theory must be uncomputable." This seems to be baked into our reality/universe. So many duals like this. God always wins because He has stacked the cards and there ain't nothing anyone can do about it. |