fasterik 3 days ago
> The litmus test will be when, if ever, someone wins a Fields using a proof-assistant in an essential way.

You're assuming that the point of interactive theorem provers is to discover new mathematics. While that's an interesting research area, it seems like the more practical application is verifying proofs one has already discovered through other means.
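As a toy illustration of what "verifying a proof" means here: in a proof assistant like Lean 4, the kernel accepts a theorem only if every step checks against the formal statement. A minimal sketch (the theorem name `add_comm_example` is made up for illustration; `Nat.add_comm` is a standard library lemma):

```lean
-- The Lean kernel verifies that the term on the right actually has
-- the type stated on the left; if the lemma didn't match, the file
-- would fail to compile.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

The point isn't the triviality of the example but the mechanism: acceptance comes from the kernel checking the proof term, not from other mathematicians being convinced.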
auntienomen 3 days ago
Exactly this. LLMs really aren't built for discovering new mathematics, especially _interesting_ new mathematics. They're built to try the most obvious patterns, and when that works, it's pretty much by definition not interesting.

What LLMs are good at is organizing concepts, filling in detail, and remembering to check corner cases. So their use should help mathematicians get a better handle on what's terra firma and what's still exploration. Which is great.

Proof by it-convinced-other-mathematicians doesn't have a flawless track record. Sometimes major theorems turn out to be wrong, or wrong as stated. Sometimes they're right, but there has never been a complete or completely correct proof in the literature. The latter case is actually quite common, and formal proof is exactly what's needed there.
pfdietz 2 days ago
Of course, once LLMs are really good at that, they can be set loose on the entire historical math literature, all 3.5M papers' worth. LLMs can then be trained on the formalized results (the ones that turn out, upon attempted formalization, to have been correct). How good do you think AI will be at proving new results given that training set?

Math is going to change, and change massively. There's a lot of whistling past the graveyard from those who are frightened by this prospect.