| ▲ | andrewla 6 days ago |
| The example given doesn't seem right to me.

> There is one problem, though, that I find easily explainable. Place a token at the bottom left corner of a grid that extends infinitely up and right, call that point (0, 0). You're given a list of valid displacement moves for the token, like (+1, +0), (-20, +13), (-5, -6), etc., and a target point like (700, 1). You may make any sequence of moves in any order, as long as no move ever puts the token off the grid. Does any sequence of moves bring you to the target?

If someone gives you such a sequence, it seems trivial to verify it in linear time. Even for arbitrary dimensions, any such witness can be verified in linear time. |
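For concreteness, here is a minimal sketch of the linear-time check being described: replay the proposed move sequence, reject if the token ever leaves the grid, and accept only if it ends on the target. The function and variable names are illustrative, not taken from the article.

```python
# Minimal sketch (names are illustrative) of the linear-time witness check:
# replay the move sequence, reject if the token ever leaves the non-negative
# grid, and accept only if it ends exactly on the target.

def verify_witness(moves, target, witness):
    pos = [0] * len(target)
    for step in witness:
        if step not in moves:                  # each step must be a listed move
            return False
        pos = [p + d for p, d in zip(pos, step)]
        if any(c < 0 for c in pos):            # token may never leave the grid
            return False
    return pos == list(target)

# Example: reach (2, 1) using the moves (+1, +0) and (+0, +1).
print(verify_witness({(1, 0), (0, 1)}, (2, 1), [(1, 0), (0, 1), (1, 0)]))  # True
```

The work is one pass over the witness, so it is linear in the witness's length; the thread below is about how that length relates to the size of the original input.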
|
| ▲ | hwayne 5 days ago | parent | next [-] |
| To be in NP, the witness must be verifiable in polynomial time with respect to the size of the original input. In this problem (VAS Reachability), the solution can be `2^2^2^...^K` steps long. Even if verification is linear in the witness, it's not polynomial in the size of the set of moves + goal. |
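For reference, a standard statement of the certificate definition being appealed to here, paraphrased from memory and so only a sketch: NP membership requires the witness itself to be polynomially bounded in the input length, not just cheap to check once you have it.

```latex
% Certificate characterization of NP (standard textbook form, paraphrased):
L \in \mathrm{NP}
  \;\iff\;
  \exists\, \text{polynomial } p,\ \text{poly-time verifier } V :\quad
  x \in L \;\iff\; \exists w \,\bigl( |w| \le p(|x|) \,\wedge\, V(x, w) = 1 \bigr)
% A VAS witness of length 2^{2^{\cdots^{K}}} blows past the |w| \le p(|x|)
% bound, even though V itself runs in time linear in |w|.
```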
| |
| ▲ | andrewla 5 days ago | parent [-] | | Hmm... I'd love to see a more formal statement of this, because it feels unintuitive. Notably, the question "given a number as input, output as many 1's as that number" takes time exponential in the input size. Is this problem therefore also strictly NP-hard? | | |
| ▲ | hwayne 5 days ago | parent | next [-] | |
> Hmm.. I'd love to see a more formal statement of this, because it feels unintuitive.

The problem is called "Vector Addition System Reachability", and the proof that it's Ackermann-complete is here: https://arxiv.org/pdf/2104.13866 (It's actually for "Vector Addition Systems with states", but the two are equivalent formalisms. They're also equivalent to Petri nets, which is what got me interested in this problem in the first place!)

> Notably the question "given a number as input, output as many 1's as that number" is exponential in the input size. Is this problem therefore also strictly NP-hard?

(Speaking off the cuff, so there's probably a mistake or two here; computability is hard and subtle!) Compression features in a lot of NP-hard problems! For example, it's possible to represent some graphs as "boolean circuits", e.g. a function `f(a, b)` that's true iff nodes `a, b` have an edge between them. This can use exponentially less space, which means a normal NP-complete problem on the graph becomes NEXPTIME-complete if the graph is encoded as a circuit. IMO this is cheating, which is why I don't like it as an example.

"Given K as input, output K 1's" is not a decision problem because it's not a boolean "yes/no". "Given K as input, does it output K ones" is a decision problem. But IIRC then for `K=3` your input is `(3, 111)`, so it's still linear in the input size. I think. | | |
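A hedged illustration of the "graph as a boolean circuit" idea mentioned above (the hypercube edge relation is my own choice of example, not from the comment): the graph on 2^n nodes is never written out; edges are answered by a tiny function, so the explicit graph is exponentially larger than its description.

```python
# Succinct graph sketch (illustrative example): a graph on 2**N_BITS nodes
# whose edge relation is given by a small function instead of an edge list.

N_BITS = 30                      # 2**30 nodes, far too many to list explicitly

def has_edge(a: int, b: int) -> bool:
    """Nodes are adjacent iff their ids differ in exactly one bit
    (a hypercube graph): a constant-size 'circuit' for the edge relation."""
    diff = a ^ b
    return diff != 0 and (diff & (diff - 1)) == 0

# The description above is a few lines, but the explicit edge list would
# have on the order of N_BITS * 2**N_BITS / 2 entries.
print(has_edge(5, 7), has_edge(5, 6))   # True False
```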
| ▲ | andrewla 5 days ago | parent | next [-] | | My point here, more than anything else, is that I find this formulation unsatisfying because it is "easy" to verify that we have a witness, but only because the witness itself is exponentially large. What I'd like as a minimal example of "give me something that is NP-hard but not NP-complete" is a problem whose input size is O(N), whose witnesses are O(N), but which requires O(e^N) time to validate that a witness is correct. I don't actually know whether this is possible. | | | |
| ▲ | andrewla 5 days ago | parent | prev [-] | | I disagree with the formulation of "decision problem" here. The problem, properly phrased as a decision problem, is "For a given K, does there exist a string with K 1s?" It is straightforward to answer in the affirmative, and to produce an algorithm that outputs such a string (though it takes exponential time). But validating that a proposed solution is in fact a valid solution also takes exponential time, if we are treating the size of the input as the baseline for validating the witness. |
| |
| ▲ | throwaway81523 5 days ago | parent | prev | next [-] | | In the definition of NP, you get the input in unary, so that gets rid of that exponential. The issue with VAS is that the number of moves required could be extremely large. There's a similar situation in chess (at least if generalized to N by N). You could assert Black has a forced win from a given position, but verifying the claim requires checking the entire game tree, rather than a succinct certificate. Generalized chess is in fact PSPACE-complete, which is generally believed to be outside of NP. | |
| ▲ | johntb86 5 days ago | parent | prev | next [-] | | It needs to be a decision problem (or easily recast as a decision problem).
"given a number as input, output as many 1's as that number" doesn't have a yes or no answer. You could try to ask a related question like "given a number as input and a list of 1s, are there as many 1s as the number?" but that has a very large input. | | |
| ▲ | andrewla 5 days ago | parent [-] | | It's straightforward to pose as a decision problem -- you're given a sequence of {0, 1} representing a base-2 input N; output a string of {0, 1} such that there are exactly N 1's. You could make it "N 1s followed by N 0s" to avoid being too trivial, but even in the original formulation there are plenty of "wrong" answers for each value of N, and determining whether a witness is correct requires O(2^b) time, where b is the number of bits in N. | |
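A small sketch of the check being described in this comment (the function name and encoding are mine): the input is N written in binary with b bits, the claimed witness is a 0/1 string that must contain exactly N ones, and merely reading such a witness already takes time on the order of 2^b.

```python
# Sketch of the formulation above (names are mine): input is N in binary
# (b bits); a witness is a {0,1} string that must contain exactly N ones.
# The witness has length at least N ~ 2**b, so even reading it takes time
# exponential in the length of the binary input.

def verify_ones(n_binary: str, witness: str) -> bool:
    n = int(n_binary, 2)
    return witness.count("1") == n

print(verify_ones("11", "1011"))   # N = 3, witness has three 1s -> True
print(verify_ones("11", "1111"))   # four 1s -> False
```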
| ▲ | bubblyworld 5 days ago | parent | next [-] | | That's not a decision problem either. You need to cast it in the form of determining membership in some set of strings (the "language"). In this case the natural choice of language is the set of all strings of the form "{encoding of n}{1111....1 n times}", which is decidable in O(log(n)+n) steps, i.e. linear time. | |
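And a sketch of the language framing in the reply above (the "#" separator is an assumption added here for clarity, not part of the comment): once the witness is part of the string whose membership is being decided, the check is a single pass, linear in that string's own length.

```python
# Decider for strings of the form "{binary n}#{n ones}", as in the comment
# above. Membership is decided in one left-to-right pass, i.e. linear time
# in the length of the string itself.

def in_language(s: str) -> bool:
    if "#" not in s:
        return False
    enc, ones = s.split("#", 1)
    if not enc or any(c not in "01" for c in enc):
        return False
    return set(ones) <= {"1"} and len(ones) == int(enc, 2)

print(in_language("11#111"))   # True: n = 3 followed by three 1s
print(in_language("11#11"))    # False: only two 1s
```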
|
| |
|
|
|
| ▲ | trixthethird 6 days ago | parent | prev | next [-] |
| Linear time in the length of the sequence, yes. But is the sequence length linear in the dimension size, or in the number of moves given? That's what is interesting. |
| |
| ▲ | andrewla 6 days ago | parent | next [-] | | Linear in the size of the witness, however many bits it takes to express it. | | |
| ▲ | trixthethird 5 days ago | parent [-] | | This applies to any computable problem though, no? At minimum the verifier has to read the witness (if we ignore PCPs and such).
The point here is that the witness grows very fast in terms of vector dimensionality and/or move set size. |
| |
| ▲ | bjornsing 6 days ago | parent | prev [-] | | It doesn’t need to be linear though. Polynomial is enough. But I guess it can be proven that the shortest possible sequence grows faster than polynomial. | | |
| ▲ | hwayne 5 days ago | parent | next [-] | | Proving it's nonpolynomial is pretty easy: just make the goal vector `(10, 10, 10)` and the displacement vectors `{(1, 0, 0), (-10, 1, 0), (-10, -10, 1)}`. It takes ~1000 steps to reach the goal, and if we add one more dimension we need 10x more steps. So it grows, at least, exponentially. Back in 1970ish someone found a problem that's at least double-exponential, proving it was 2-EXPTIME-hard. It was conjectured to be 2-EXPTIME-complete for the longest time, but turned out to be significantly harder. | |
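A back-of-envelope sketch of the count in the comment above (the recurrence is my own rough estimate, not a verified shortest path): each extra dimension multiplies the number of moves needed by roughly 10.

```python
# Rough estimate (back-of-envelope, not a verified shortest path) of the
# move count for the example above: goal (10, ..., 10) in d dimensions with
# moves (1,0,...,0), (-10,1,0,...,0), (-10,-10,1,...,0), and so on.
# cost(k): moves to raise coordinate k by one, after pumping every lower
# coordinate up to 10 so the move never leaves the grid.

def cost(k: int) -> int:
    if k == 1:
        return 1
    return 1 + sum(10 * cost(j) for j in range(1, k))

for d in (1, 2, 3, 4):
    print(d, 10 * cost(d))   # 10, 110, 1210, 13310 -- roughly 10**d growth
```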
| ▲ | trixthethird 6 days ago | parent | prev [-] | | I think they proved it grows with the Ackermann function. | |
| ▲ | bjornsing 5 days ago | parent [-] | | Sounds implausible… The number of computational steps needed to find the shortest sequence maybe grows with the Ackermann function. But the length of the shortest sequence itself? | |
|
|
|
|
| ▲ | Choco31415 6 days ago | parent | prev | next [-] |
| A sequence is easy to verify. Choosing the sequence, not so much. Roughly put, that is the certificate definition of being in NP. |
| |
| ▲ | andrewla 6 days ago | parent [-] | | The goal here was to show that it was strictly NP-hard, i.e. harder than any problem in NP. | | |
| ▲ | Nevermark 5 days ago | parent [-] | | Harder to solve, not necessarily harder to verify? If I am understanding things right. | | |
| ▲ | andrewla 5 days ago | parent [-] | | The crazy thing about the definition of NP-completeness is that Cook's theorem says that all problems in NP can be reduced in polynomial time to an NP-complete problem. So if a witness to a problem can be verified in polynomial time, it is by definition in NP and can be reduced to an NP-complete problem. If I can verify a solution to this problem by finding a path in polynomial time, it is by definition in NP. The goal here was to present an example of a problem known to not be in NP. | | |
| ▲ | thaumasiotes 5 days ago | parent [-] | | > The crazy thing about the definition of NP-completeness is that Cook's theorem says that all problems in NP can be reduced in polynomial time to an NP-complete problem. What were you trying to say here? Cook's theorem says that SAT is NP-complete. "All problems in NP can be reduced in polynomial time to an NP-complete problem" is just a part of the definition of NP-completeness. |
|
|
|
|
|