ComplexSystems | 2 months ago
The halting problem can be approximated by a sequence of increasingly accurate computable functions - "partial halting oracles" which give the right answer on "many" inputs, with each better than the last. The sequence "converges to" or "approximates increasingly well" the true halting function in that for any input, there is some point in the sequence such that all subsequent partial halting oracles analyze its behavior correctly.

The halting problem is "unsolvable" because the goalposts are very high. An algorithm that "decides" the halting problem can have no "failure modes" of any kind, even if the probability of failure is vanishingly small. It must work on every single program. As soon as you limit the scope of the programs you care about analyzing in any reasonable way, like "deterministic programs without randomness that use at most 32GB RAM," the proof no longer applies.

The complexity classes you refer to don't conflict with any of this. In the general case, it is undecidable to determine what complexity class an algorithm (or decision problem) is in, for instance, but this isn't usually summarized as "computational complexity analysis is an unsolvable problem."
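Concretely, here is a minimal sketch of such a sequence (my own illustration: programs are modeled as generators that yield once per computation step, and the n-th oracle simply runs its input for n steps):

    from typing import Callable, Iterator

    def partial_halting_oracle(n: int, program: Callable[[], Iterator[None]]) -> bool:
        # n-th approximation to the halting function: run `program` for
        # at most n steps and answer "halts" only if it halted in time.
        steps = program()
        for _ in range(n):
            try:
                next(steps)
            except StopIteration:
                return True   # halted within n steps: answer is correct
        return False  # guess "does not halt"; wrong only when the
                      # program halts after more than n steps

    def halts_after_5() -> Iterator[None]:
        for _ in range(5):
            yield

    def loops_forever() -> Iterator[None]:
        while True:
            yield

    print(partial_halting_oracle(3, halts_after_5))   # False (budget too small)
    print(partial_halting_oracle(10, halts_after_5))  # True
    print(partial_halting_oracle(10, loops_forever))  # False (always correct here)

For a non-halting program every oracle in the sequence is already correct; for a halting program, every oracle with n at least the program's running time is correct. That is exactly the pointwise convergence described above.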
pron | 2 months ago | parent
My point is that focusing on the halting theorem is silly, because we have had much more precise and less binary generalisations of it (the time hierarchy theorems) since 1965. Finding "practical approximations" that are easier than the hard limits is not merely not easy; it would be a huge deal.

> The sequence "converges to" or "approximates increasingly well" the true halting function in that for any input, there is some point in the sequence such that all subsequent partial halting oracles analyze its behavior correctly.

This is irrelevant. "There exists X such that for all Y" is very different from "for all Y there exists X", and the latter is by no means an "effective approximation" of the former.

At the end of the day, we're always looking for some algorithm that is useful for a large class of inputs, and we know that any such algorithm cannot violate the time hierarchy in any way: it will be able to efficiently solve problems that are easy and unable to efficiently solve problems that are hard, and making any algorithm solve more problems will require it to run longer. It may be the case that a large set of practical problems are easy, but it is also the case that a large set of practical problems are hard. Only a world-changing discovery, such as P = PSPACE or tractable, useful approximations for all of PSPACE, would change that.

None of this means, of course, that there aren't many interesting easy problems we have yet to solve.
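To spell out the quantifier distinction in the notation of the parent comment (the formalisation is mine: H_n is the n-th partial oracle, H the true halting function, p ranges over programs):

    % Pointwise convergence, which the sequence of partial oracles
    % satisfies: the threshold N may depend on the input p.
    \forall p\; \exists N\; \forall n \ge N:\ H_n(p) = H(p)

    % A halting decider: one n (one algorithm) correct on every input.
    % Swapping the quantifiers this way is what the theorem rules out.
    \exists n\; \forall p:\ H_n(p) = H(p)

    % Deterministic time hierarchy (Hartmanis-Stearns, 1965): for
    % time-constructible f, strictly more time solves strictly more
    % problems, so solving more requires running longer.
    \mathrm{DTIME}\!\left(o\!\left(f(n)/\log f(n)\right)\right) \subsetneq \mathrm{DTIME}(f(n))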