Reverse math shows why hard problems are hard(quantamagazine.org)
58 points by gsf_emergency_6 4 hours ago | 7 comments
emil-lp an hour ago | parent | next [-]

Whenever the pigeon-hole principle is name-dropped, we should also drop Dijkstra's commentary, "The undeserved status of the pigeon-hole principle":

https://www.cs.utexas.edu/~EWD/transcriptions/EWD10xx/EWD109...

peacebeard 9 minutes ago | parent [-]

Wow. I thought the metaphor was stupid when I was taught this in college, and only now, decades later, I find out Dijkstra agreed.

degamad 3 hours ago | parent | prev | next [-]

Specifically, reverse math (a subfield of metamathematics that asks which axioms are needed to prove a given theorem, effectively swapping the roles of axioms and theorems) allows us to show that some hard problems are equivalent to each other.

EDIT: I think this line is the most telling:

> But he cautioned that the reverse mathematics approach may be most useful for revealing new connections between theorems that researchers have already proved. "It doesn’t tell us much, as far as we can say, about the complexity of statements which we do not know how to prove."

So, at this point, it helps us understand more about problems we already understand a little about, but nothing yet about new problems.

gsf_emergency_6 2 hours ago | parent [-]

That's par for a field whose name seems to have been inspired by "reverse-engineering", which by construction doesn't try to understand products that have not yet reached the market :)

letmetweakit 22 minutes ago | parent | prev | next [-]

The Travelling Salesman Problem in one dimension, on a line, is trivial. I wonder what the connection is between dimensionality and the hardness of problems like this.
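For what it's worth, here's a minimal sketch of why the 1D case is trivial (function names are mine, just for illustration): on a line, an optimal closed tour simply sweeps from the leftmost point to the rightmost and back, so sorting solves it in O(n log n), whereas the general problem is NP-hard.

```python
def tsp_1d_tour(points):
    """Optimal closed tour over points on a line: sweep right, then back left."""
    ordered = sorted(points)
    # Visit points left-to-right, then return through the interior points.
    # Total length is 2 * (max - min), which no closed tour can beat.
    return ordered + ordered[-2:0:-1]

def tour_length(tour):
    """Length of the closed tour (last point connects back to the first)."""
    closed = tour + tour[:1]
    return sum(abs(b - a) for a, b in zip(closed, closed[1:]))

pts = [3.0, -1.0, 7.5, 2.0]
print(tour_length(tsp_1d_tour(pts)))  # 2 * (7.5 - (-1.0)) = 17.0
```

The lower bound is easy to see: any closed tour must cross the interval between the leftmost and rightmost points at least twice.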

Bankq 3 hours ago | parent | prev [-]

The approach reminds me of NP-completeness (computational hardness vs. proof-theoretic hardness). Am I over-simplifying?

logician127 an hour ago | parent [-]

You're right on the money. It's intimately tied to computability theory, complexity theory's more abstract sibling (i.e. the halting problem and, beyond it, Turing degrees). Both rely on the same core techniques: diagonalization and reductions. The details can differ a lot, because estimating time bounds and determining logical equivalence quickly become different problem spaces, so results in one aren't directly applicable to the other. But a researcher in one is usually well versed in the other.
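To make the shared diagonalization technique concrete, here is a sketch of the classic halting-problem argument (all names are illustrative, not a real API). If a decider `halts(prog, inp)` existed, we could build a program that does the opposite of whatever `halts` predicts for a program run on its own source:

```python
def make_contrarian(halts):
    """Given a supposed halting decider, build the program that defeats it."""
    def contrarian(prog):
        if halts(prog, prog):
            while True:  # halts predicted "halts", so loop forever
                pass
        return  # halts predicted "loops", so halt immediately
    return contrarian
```

Running the contrarian on itself yields a contradiction either way, so no total `halts` can exist; reductions then transfer that impossibility to other problems, much as reductions transfer NP-hardness.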