| I grossly underestimate the value of the time highly educated people spend decoding the arguments of another expert? Consider all the time saved if, for each theorem-proof pair, the proof were machine readable: you could let your computer verify the proclaimed proof as a precondition for studying it. That would save a lot of people a lot of time, and it's not random people's time being saved, it's highly educated people's time. That would allow much more novel research to happen with the same number of expert-years. If the population of planet A used formal verification and planet B refused to, which planet do you predict would evolve faster? |
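To make "let your computer verify the proclaimed proof" concrete, here is a minimal sketch of what a machine readable theorem/proof pair looks like in Lean 4 (plain core library, no mathlib; the theorem names and statements are purely illustrative):

```lean
-- A machine readable theorem/proof pair: the statement and the proof are
-- both checked by the compiler before any human has to trust the argument.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- A slightly less trivial statement, proved by induction. Deleting either
-- case makes the file fail to check, so an incomplete proof cannot slip by.
theorem zero_add_example (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ k ih => rw [Nat.add_succ, ih]
```

Checking the file (e.g. `lean Example.lean`) is the "precondition" step: if it compiles, the proofs are correct with respect to Lean's kernel, and nobody has to spend time decoding them first.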
| You appear to be deliberately ignoring the point. Currently, in 2025, it is not possible in most fields for a random expert to produce a machine checked proof. The work of everyone in the field coming together to create a machine checked proof is also more work than it takes for the whole field to learn an important result in the traditional way. This is a fixable problem. People are working hard on building up a big library of checked proofs, to serve as building blocks. We're working on having LLMs read a paper and fill out a template for that machine checked proof, to greatly reduce the work. In fields where the libraries are built up, this is invaluable. But as a general vision of how people should be expected to work? This is more 2035, or maybe 2045, than 2025. That future is visible, but isn't here. |
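As a sketch of what "fill out a template" could look like (my illustration, not any specific project's format): the paper's statements get pinned down as formal declarations, with the proofs left as placeholders to be filled in from the library of already-checked lemmas.

```lean
-- Hypothetical template "extracted" from a paper: the statements are fixed
-- formally, while the proofs are left as `sorry` placeholders for a human
-- (or an LLM) to fill in. Names and statements here are purely illustrative.
theorem paper_lemma_1 (n : Nat) (h : 2 ∣ n) : 2 ∣ n * n := by
  sorry

theorem paper_main_result (n : Nat) (h : 2 ∣ n) : 4 ∣ n * n := by
  sorry
```

Lean accepts such a file with warnings, so the skeleton type-checks as a whole even before the individual proofs exist; the remaining (and expensive) work is eliminating the `sorry`s.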
| |
▲ | DoctorOetker 2 days ago | parent [-] | | It's interesting that you place it 10 or 20 years from now, given that MetaMath's initial release was... 20 years ago! So it's not really about the planets not being in the right positions yet. The Roman Empire lasted for centuries. If they had wanted to do rigorous science, they could have built cars, helicopters, ... But they didn't (in Rome, do as the Romans do). This is not about the planets not being in the right position, but about Romans in Rome. | | |
| ▲ | btilly 2 days ago | parent [-] | | Let's see. I could believe you, an internet stranger. And believe that this problem was effectively solved 20 years ago. Or I could read Terry Tao's https://terrytao.wordpress.com/wp-content/uploads/2024/03/ma... and believe his experience that creating a machine checkable version of an informal proof currently takes something like 20x the work. And the machine checkable version can't just reference the existing literature, because most of that isn't in machine checkable form either. I'm going with the guy who is considered one of the greatest living mathematicians. There is an awful lot that goes into creating a machine checkable proof ecosystem, and the file format isn't the hard bit. | | |
▲ | DoctorOetker a day ago | parent [-] | | 20x the work of what? The work of staying vague? There is no upper limit to that kind of "work" saving: write five times vaguer and formal verification can be claimed to be 100x more work. If the ultimate readership (over all future time) were less than 20 per theorem, or whatever the vagueness factor is, the current paradigm would be fine. If the ultimate readership (not citation count) were higher than 20 per theorem, it's a net societal loss to have the readers guess what the actual proof is: it is collectively less effort for the author to formalize the theorem than for all the readers to reconstruct the actual proof. Since mathematicians both read and author proofs, they would save themselves time, or be able to move the frontier of mathematics faster. From a taxpayer perspective, we should precondition mathematics funding (not publication) on machine readable proofs. This doesn't mean every mathematician would have to do it personally: if some hypothetical person had crazy good intuitions and a high enough rate of results, they could hire people to formalize the work for them to meet the precondition. As long as the results are successfully formalized, that team could continue producing mathematics. |
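Spelling out the break-even arithmetic behind that readership threshold, taking the roughly 20x figure from Tao's note and assuming (my assumption, not a measured number) that one reader's effort to reconstruct an informal proof is comparable to the author's effort for the informal write-up:

```latex
% Assumptions: C  = author's effort for the informal write-up,
%              20C = extra effort to formalize it (the ~20x figure),
%              R  = eventual readership of the proof,
%              rC = one reader's effort to reconstruct the informal argument.
\[
  \underbrace{C + 20C}_{\text{formalize once; readers trust the checker}}
  \;<\;
  \underbrace{C + R \cdot r C}_{\text{every reader re-derives the argument}}
  \qquad\Longleftrightarrow\qquad
  R > \frac{20}{r}.
\]
% With r close to 1, the break-even readership is roughly 20 readers per theorem.
```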
|
|
|