btilly 2 days ago

Let's see.

I could believe you, an internet stranger. And believe that this problem was effectively solved 20 years ago.

Or I could read Terry Tao's https://terrytao.wordpress.com/wp-content/uploads/2024/03/ma... and believe his experience that creating a machine-checkable version of an informal proof currently takes something like 20x the work. And the machine-checkable version can't just reference the existing literature, because most of that isn't in machine-checkable form either.

I'm going with the guy who is considered one of the greatest living mathematicians. There is an awful lot that goes into creating a machine-checkable proof ecosystem, and the file format isn't the hard bit.
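
For a concrete sense of what "machine checkable" entails, here is a toy Lean 4 proof of a fact an informal paper would dismiss in half a sentence. It's my own illustration (assuming Mathlib), not an example from Tao's paper:

    import Mathlib

    -- Informally: "the sum of two even numbers is even", a one-line
    -- remark. Machine checkable, every implicit step must be spelled
    -- out: unpack both evenness witnesses, then do the arithmetic.
    theorem even_add_even {a b : ℕ} (ha : Even a) (hb : Even b) :
        Even (a + b) := by
      obtain ⟨x, hx⟩ := ha    -- hx : a = x + x
      obtain ⟨y, hy⟩ := hb    -- hy : b = y + y
      exact ⟨x + y, by omega⟩ -- a + b = (x + y) + (x + y)

And this is the easy case: nothing here leans on uncited "standard results" the way a real paper does.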

DoctorOetker a day ago | parent

20x the work of what? The work of staying vague? By that measure there is no upper limit to the "work" savings: write the informal proof 5 times vaguer and formal verification can be claimed to be 100x more work.

If the ultimate readership (over all future time) were fewer than 20 readers per theorem, or whatever the vagueness factor turns out to be, the current paradigm would be fine.

If the ultimate readership (not the citation count) is higher than 20 per theorem, it's a net societal loss to have the readers guess what the actual proof is: it is collectively less effort for the author to formalize the theorem once than for every reader to reconstruct it (see the back-of-envelope sketch below). Since mathematicians both read and author proofs, they would save themselves time overall, or be able to move the frontier of mathematics faster.

From a taxpayer perspective, we should precondition mathematics funding (not publication) on machine-readable proofs. That doesn't mean every mathematician would have to do the formalizing themselves: if some hypothetical person had crazy good intuitions and a high enough rate of results, they could hire people to formalize the work for them and so meet the precondition. As long as the results are successfully formalized, that team could continue producing mathematics.
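
To make the break-even arithmetic explicit, here is a back-of-envelope sketch. The unit costs are my assumptions; only the 20x factor comes from the Tao estimate quoted upthread:

    % Break-even for formalizing vs. staying informal (assumed costs):
    %   a = effort to write the informal proof
    %   f = extra effort to formalize it, f \approx 20a (Tao's estimate)
    %   g = extra effort one reader spends reconstructing vague steps
    % With R eventual readers, the status quo costs society a + Rg,
    % formalizing costs a + f, so formalizing is cheaper exactly when
    \[
      a + f < a + Rg
      \iff
      R > \frac{f}{g} \approx \frac{20a}{g},
    \]
    % which at g \approx a recovers the readership-of-20 threshold.

Everything turns on how g compares to a; the inequality is only as good as that guess.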