sublinear · 9 hours ago
You might be joking, but you're probably also not that far off from reality. I think more people should question all this nonsense about AI "solving" math problems. The details about human involvement are always hazy, and the significance of the problems is opaque to most. We are very far from the sensationalized and strongly implied idea that we are doing something miraculous here.
johnfn · 9 hours ago
I am kind of joking, but I actually don't know where the flaw in my logic is. It's like one of those fake math proofs that 1 + 1 = 3. If I were to hazard a guess, I think that tokens spent thinking through hard math problems probably correspond to harder human thought than tokens spent thinking through React issues. I mean, LLMs have to expend hundreds of tokens just to count the number of r's in "strawberry". You can't tell me that if I count the number of r's in "strawberry" 1000 times, I have done the mental equivalent of solving an open math problem.
famouswaffles · 8 hours ago
> The details about human involvement are always hazy and the significance of the problems are opaque to most.

Not really. You're just in denial and not actually interested in the details. This very post includes the chat transcript of the solution.
typs · 9 hours ago
I mean, the details are in the post. You can see the conversation history and the mathematician's survey of the problem.