ComplexSystems 9 hours ago
> We've had automated theorem proving since the 60s.

By that logic, we've had LLMs since the 60s!

> What we need is automated theorem discovery.

I don't see any reason you couldn't train a model to do this. You'd have to focus it on generating follow-up questions to ask after reading a corpus of literature, playing around with some toy examples in Python, and making a conjecture out of it. This seems much easier than training it to actually complete an entire proof.

> Erdős discovered these theorems even if he wasn't really able to prove them. Euler and Gauss discovered a ton of stuff they couldn't prove. It is weird that nobody considers this to be intelligence.

Who says they don't? I wouldn't be surprised if HarmonicMath, DeepMind, etc. have also thought about this kind of thing.

> Automated axiom creation seems a lot harder. How is an LLM supposed to know that "between any two points there is a line" formalizes an important property of physical space?

That's a good question! It would be interesting to see if this is an emergent property of multimodal LLMs trained specifically on this kind of thing. You would need mathematical reasoning, visual information, and language encoded into some shared embedding space where similar things are mapped right next to each other geometrically.
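(A minimal sketch of the "toy examples in Python → conjecture" step described above, under assumptions of my own: the property probed here, a Goldbach-style check, plus the helper names and the bound of 1000 are purely hypothetical stand-ins for whatever a real system would explore.)

    # Hypothetical sketch: probe toy examples, then phrase the observed
    # pattern as a conjecture a downstream model could refine or try to prove.

    def is_prime(n: int) -> bool:
        """Trial-division primality check; fine for small toy examples."""
        if n < 2:
            return False
        return all(n % d for d in range(2, int(n ** 0.5) + 1))

    def is_sum_of_two_primes(n: int) -> bool:
        """Check whether n can be written as p + q with both p and q prime."""
        return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

    def probe(limit: int = 1000) -> str:
        """'Play around' with even numbers below limit, emit a conjecture string."""
        bad = [n for n in range(4, limit, 2) if not is_sum_of_two_primes(n)]
        if bad:
            return f"Refuted on toy examples: {bad[0]} is not a sum of two primes."
        return (f"Conjecture: every even integer in [4, {limit}) is a sum of two primes. "
                "Follow-up question: does this hold for all even integers > 2?")

    print(probe())  # no counterexample below 1000, so the conjecture string is printed

The point isn't the specific property; it's that "enumerate small cases, notice no counterexample, emit a candidate statement and a follow-up question" is a much cheaper loop than producing a full proof.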
AdieuToLogic 40 minutes ago | parent
>> We've had automated theorem proving since the 60s.

> By that logic, we've had LLMs since the 60s!

From a bit earlier[0], actually:

Were those "large"? I'm sure at the time they were thought to be so.

0 - https://ai-researchstudies.com/history-of-large-language-mod...