▲ | bob1029 5 days ago | parent [-]

> You're not making any approximation that shortcuts any computational complexity results.

I completely agree in the default/initial case.

> (Such approximations exist in some cases, but that's not what you're doing here)

However, the entire point of genetic programming is that once you find something that is "on the map" at all, you can begin to exploit the region around that program with some reasonably accurate expectations regarding how certain behaviors will unfold in the local space. The larger the population, the more stable this tends to become on average.

So, for pure exploration in GP, you do indeed suffer the full weight of the halting theorem. As you transition into the exploitation regime, this becomes more of a grey area.
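In practice, GP systems handle this by never deciding halting for any candidate at all: every evolved program is run under a step budget, and candidates that exhaust it are simply scored as failures. A minimal sketch of that idea, assuming a toy stack-machine representation of evolved programs (the instruction set and the names `run` and `fitness` are illustrative, not anything from the comment above):

```python
# A step budget stands in for a halting decision: candidates that exceed it
# are treated as failures, never "proved" non-terminating.
MAX_STEPS = 1_000

def run(program, x, max_steps=MAX_STEPS):
    """Interpret `program` (a list of (op, arg) instructions) on input x.
    Returns (value, steps) or None if the step budget is exhausted."""
    stack, pc, steps = [x], 0, 0
    while pc < len(program):
        if steps >= max_steps:
            return None                      # budget exhausted -- counted as a failure
        op, arg = program[pc]
        if op == "push":
            stack.append(arg)
        elif op == "add" and len(stack) >= 2:
            stack.append(stack.pop() + stack.pop())
        elif op == "mul" and len(stack) >= 2:
            stack.append(stack.pop() * stack.pop())
        elif op == "jump":                   # backward jumps can loop forever
            pc = arg
            steps += 1
            continue
        pc += 1
        steps += 1
    return (stack[-1] if stack else 0), steps

def fitness(program, target=lambda x: 3 * x + 1, xs=range(10)):
    """Lower is better; budget-breaking candidates get the worst possible score."""
    total = 0
    for x in xs:
        result = run(program, x)
        if result is None:
            return float("inf")
        value, _ = result
        total += abs(value - target(x))
    return total

# A candidate that computes 3*x + 1 for the input x already on the stack:
candidate = [("push", 3), ("mul", None), ("push", 1), ("add", None)]
print(fitness(candidate))   # 0 -- perfect score

# A candidate that loops forever is penalised rather than "decided":
looper = [("jump", 0)]
print(fitness(looper))      # inf
```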
▲ | pron 5 days ago | parent [-]

> So, for pure exploration in GP, you do indeed suffer the full weight of the halting theorem. As you transition into the exploitation regime, this becomes more of a grey area.

I don't understand what the grey area is. The time hierarchy theorem (which we can regard as the "generalised halting theorem") is a theorem; it is absolute. What we have here isn't something that uses an approximation that holds with some probability, which we could perhaps call a "grey area", but something that bears the full brunt of the theorem.

That there are subsets of problems in some complexity class X that can be solved faster than X's upper limit is the very point of the complexity hierarchy. Yes, there is a subset of problems in EXPTIME that can be solved in polynomial time, hence the existence of the class P, but that doesn't make EXPTIME a "grey area". If you're solving some problem quickly, then we know that what you've solved was not one of the hard problems to begin with. If you're solving some problems in polynomial time, then those problems are in P.

For example, heuristic SAT solvers solve many SAT instances quickly. But their authors don't consider that evidence that P = NP; they understand that the instances their solvers handle are "easy" (which is not to say they're not useful). In other words, what they're saying is that many useful SAT instances are easy, not that they're able to solve the hard ones efficiently.
▲ | ComplexSystems 5 days ago | parent | next [-]

I think the idea in the original post was to adjust the goalposts of the original halting problem to get something easier to solve. Instead of looking for programs that "eventually" halt while reproducing the required outputs, one can look for programs that halt "in some reasonable amount of time." The time-bounded halting problem is easier than the original: it is decidable. As one increases the amount of time one regards as "reasonable," one gets a sequence of time-bounded halting problems that can be viewed as successively better approximations of the original.

There are many similarly "approximate" or "relaxed" versions of the halting problem that are "good enough" in real life and are perfectly decidable.
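A minimal sketch of that time-bounded version, assuming programs are modelled as Python generators that yield once per simulated step (an illustrative convention, not anything from the post): the bounded decider always terminates, and raising the budget gives the successively better approximations described above.

```python
def halts_within(program, k):
    """Decide the k-step-bounded halting problem for `program`, modelled as a
    generator function that yields once per simulated step.  Always terminates."""
    gen = program()
    for _ in range(k):
        try:
            next(gen)
        except StopIteration:
            return True          # halted within the budget
    return False                 # did not halt within k steps (says nothing about "ever")

def halts_after_100_steps():
    for _ in range(100):
        yield

def loops_forever():
    while True:
        yield

# Increasing the budget gives successively better approximations of true halting:
print(halts_within(halts_after_100_steps, 50))     # False -- budget too small
print(halts_within(halts_after_100_steps, 1_000))  # True
print(halts_within(loops_forever, 1_000))          # False (and stays False for any k)
```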
▲ | pron 3 days ago | parent [-]

Not only does nothing here approximate a "solution" to an unsolvable problem; the fact that you can decide more programs the more time you spend simulating them is the actual statement of the generalised halting theorems. Indeed, this is literally how they are summarised: "Informally, these theorems say that given more time, a Turing machine can solve more problems." [1]

When someone says that you can, in practice, sort of get around the halting theorem by noting that you can solve more problems given more time, do you see how that's not an approximation of a "solution" that gets around any theorem, but the very point of the theorems? If you see that you can solve more things with more time, you are observing the halting theorem and its generalisations at work, not getting around them in practice.

But that's not the end of it, because now we can ask what it means to "get around", or "approximately get around", a theorem that basically says you can buy a bike for $50 but you can't buy a car for that price. Saying "I don't mind riding a bike" is clearly not getting around anything, but saying "most people wouldn't mind riding a bike" could indeed be a way to state that the limitations of the theorem don't affect many people in practice. But such a statement also isn't true. If someone wants to claim that in practice we rarely come across intractability where it matters, that would indeed be interesting, but I don't think it's true either.

So if someone wants to say that most problems that are interesting in practice are decidable, that may well be true, but even Turing realised that decidability isn't a very interesting property. Decidability was replaced by tractability in the 70s, just as halting was replaced by the more general and precise time hierarchy in 1965. We discovered, more precisely, that not only are there problems that cannot be solved in any time bound, but that there are problems solvable in 2^N steps that cannot be solved in N steps.

And if someone wants to say that we don't frequently come across intractable problems in practice, that is not true. If anything, the limits we run across in practice are more constrained than the known theoretical ones. Even though we have yet to prove a separation between P and NP, or even between P and PSPACE, we do find it much harder in practice to solve problems that are complete for NP than problems known to be in P, and problems that are complete for PSPACE are harder in practice than those that are complete for NP.

[1]: https://en.wikipedia.org/wiki/Time_hierarchy_theorem
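For reference, the statement behind the 2^N-versus-N claim is the deterministic time hierarchy theorem, which in its standard modern form (see [1]) reads:

```latex
% Deterministic time hierarchy theorem: for any time-constructible g
% and any f with f(n) \log f(n) = o(g(n)),
\mathrm{DTIME}\big(f(n)\big) \subsetneq \mathrm{DTIME}\big(g(n)\big).
% In particular, since n \log n = o(2^{n}),
\mathrm{DTIME}(n) \subsetneq \mathrm{DTIME}\big(2^{n}\big),
% i.e. there are problems solvable in 2^N steps that cannot be solved in N steps.
```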
▲ | ComplexSystems 8 hours ago | parent [-]

The halting problem can be approximated by a sequence of increasingly accurate computable functions - "partial halting oracles" that give the right answer on "many" inputs, each better than the last. The sequence "converges to", or "approximates increasingly well", the true halting function in the sense that for any input, there is some point in the sequence beyond which all subsequent partial halting oracles analyze its behavior correctly.

The halting problem is "unsolvable" because the goalposts are very high. An algorithm that "decides" the halting problem can have no "failure modes" of any kind, even ones with vanishingly small probability: it must work on every single program. As soon as you limit the scope of the programs you care about analyzing in any reasonable way, like "deterministic programs that use at most 32GB of RAM," the proof no longer applies.

The complexity classes you refer to don't conflict with any of this. In the general case, for instance, it is undecidable to determine which complexity class an algorithm (or decision problem) belongs to, but this isn't usually summarized as "computational complexity analysis is an unsolvable problem."
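A minimal sketch of such a sequence of partial oracles, reusing the generator-as-program convention from the earlier sketch (the helper programs here are illustrative): the k-th oracle is only ever wrong about programs that halt after more than k steps, so for every fixed program the answers are eventually correct.

```python
def make_partial_oracle(k):
    """k-th element of the sequence: claims 'halts' iff the program halts within k steps.
    Wrong only on programs that halt after more than k steps, so for any fixed program
    every oracle far enough along the sequence answers correctly."""
    def oracle(program):
        gen = program()
        for _ in range(k):
            try:
                next(gen)
            except StopIteration:
                return True      # really does halt
        return False             # claim: does not halt (wrong only if it halts later)
    return oracle

def halts_at_step_500():
    for _ in range(500):
        yield

def loops_forever():
    while True:
        yield

oracles = [make_partial_oracle(k) for k in (10, 100, 1_000)]
print([o(halts_at_step_500) for o in oracles])  # [False, False, True] -- eventually correct
print([o(loops_forever) for o in oracles])      # [False, False, False] -- always correct
```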
▲ | hwayne 5 days ago | parent | prev [-]

> If you're solving some problem quickly, then we know that what you've solved was not one of the hard problems to begin with. If you're solving some problems in polynomial time, then those problems are in P.

I thought complexity classes applied to the problem definition, not to specific instances of the problem? An easy SAT instance doesn't belong to any complexity class; the set of all SAT instances belongs to NP.
▲ | pron 5 days ago | parent [-]

Yes, we must parameterise problems to meaningfully talk about classes [1], but interesting algorithms are interesting because they tractably give an answer for some large set of inputs. Their authors may not always know how to meaningfully parameterise the full set of inputs that they solve efficiently, but if the algorithms are interesting, there is some implied set that could be parameterised.

This is less handwavy than it may sound, because SAT solvers don't actually solve "the SAT problem" efficiently. Rather, they make guesses about true/false assignments to particular SAT variables, and the algorithm knows when a guess is successful and when it isn't, because when it makes a wrong guess it backtracks (i.e. it knowingly takes a step that could be said to be "exponential"). Over the entire set of SAT instances of a particular length, every guess will still be right some of the time and wrong (leading to a backtrack) some of the time. Yet the utility of these solvers comes from them making good guesses more than half the time. That means there must be something special about the set of instances they can solve with a smaller-than-expected number of backtracks. Put another way, every solver algorithm could be presented with specific instances, of any size, for which the number of backtracks is small or large.

[1]: This makes the notion of circuit complexity challenging, hence the notion of "uniformity": https://en.wikipedia.org/wiki/Circuit_complexity#Uniformity
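A minimal sketch of this guess-and-backtrack behaviour, assuming CNF input given as lists of signed integers (DIMACS-style literals); the `dpll` function and its naive variable-choice heuristic are illustrative toys, not how production CDCL solvers are implemented:

```python
def dpll(clauses, assignment=None, stats=None):
    """Tiny DPLL-style SAT search over CNF given as a list of lists of non-zero ints
    (positive = variable, negative = its negation).  Returns (sat?, stats);
    stats['backtracks'] counts the wrong guesses the search had to undo."""
    if assignment is None:
        assignment = {}
    if stats is None:
        stats = {"backtracks": 0}

    # Simplify clauses under the current partial assignment.
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue                      # clause already satisfied
        rest = [l for l in clause if abs(l) not in assignment]
        if not rest:
            return False, stats           # clause falsified -> conflict
        simplified.append(rest)
    if not simplified:
        return True, stats                # all clauses satisfied

    # Guess a value for some unassigned variable (naive heuristic: first literal).
    var = abs(simplified[0][0])
    for guess in (True, False):
        sat, _ = dpll(simplified, {**assignment, var: guess}, stats)
        if sat:
            return True, stats
        stats["backtracks"] += 1          # the guess failed: undo and try the other value
    return False, stats

# Example: (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
sat, stats = dpll([[1, 2], [-1, 3], [-2, -3]])
print(sat, stats["backtracks"])           # True 1 -- satisfiable, one failed guess undone
```

On easy instances the backtrack count stays small; an adversarially chosen instance of the same size can force a number of backtracks exponential in the number of variables, which is the distinction being drawn above.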