| ▲ | Epoch confirms GPT5.4 Pro solved a frontier math open problem (epoch.ai) |
| 327 points by in-silico 7 hours ago | 329 comments |
| |
|
| ▲ | svara 15 minutes ago | parent | next [-] |
| The capabilities of AI are determined by the cost function it's trained on. That's a self-evident thing to say, but it's worth repeating, because there's this odd implicit notion sometimes that you train on some cost function and then, poof, "intelligence", as if that were a mysterious other thing. Really, intelligence is minimizing a complex cost function. The leadership of the big AI companies sometimes imply something else when they talk of "generalization", but there is no mechanism to generate a model with capabilities beyond what is useful for minimizing a specific cost function.
You can view the progress of AI as progress in coming up with smarter cost functions: cleaner, larger datasets, pretraining, RLHF, RLVR. Notably, exciting early progress in AI came in places where simple cost functions generate rich behavior (chess, Go). The recent impressive advances in AI are similar. Mathematics and coding are extremely structured, and properties of a coding or maths result can be verified using automatic techniques. You can set up an RLVR "game" for maths and coding. It thus seems very likely to me that this is where the big advances are going to come from in the short term.
However, it does not follow that maths ability on par with expert mathematicians will lead to superiority over human cognitive ability broadly. A lot of what humans do has social rewards which are not verifiable, or involves genuine Knightian uncertainty, where a reward function cannot be built without actually operating independently in the world.
To be clear, none of the above is meant to talk down past or future progress in AI; I'm just trying to be more nuanced about where I believe progress can be fast and where it's bound to be slower. |
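To make the RLVR point concrete, a minimal sketch of such a "game" (a toy with made-up names, not any lab's actual setup):

    def verifiable_reward(submission: str, expected: str) -> float:
        # The reward is computed mechanically, not judged by a human;
        # this is what makes maths and code the easy cases for RL.
        return 1.0 if submission.strip() == expected.strip() else 0.0

    def rlvr_step(model, problem: dict) -> float:
        # The model proposes a solution; only the verifier's score feeds back.
        attempt = model(problem["prompt"])  # hypothetical model call
        return verifiable_reward(attempt, problem["answer"])

There is no analogous verifiable_reward you can write for, say, "was this email tactful" - which is exactly where the social-reward caveat above bites.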
|
| ▲ | qnleigh an hour ago | parent | prev | next [-] |
| I am kind of amazed at how many commenters respond to this result by confidently asserting that LLMs will never generate 'truly novel' ideas or problem solutions.
> AI is a remixer; it remixes all known ideas together. It won't come up with new ideas
> it's not because the model is figuring out something new
> LLMs will NEVER be able to do that, because it doesn't exist
It's not enough to say 'it will never be able to do X because it's not in the training data,' because we have countless counterexamples to this statement (e.g. 167,383 * 426,397 = 71,371,609,051, or the above announcement). You need to say why it can do some novel tasks but could never do others. And it should be clear why this post or others like it don't contradict your argument. If you have been making these kinds of arguments against LLMs and acknowledge that novelty lies on a continuum, I am really curious why you draw the line where you do. And most importantly, what evidence would change your mind? |
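For anyone who wants to check the example product rather than take anyone's word for it, plain Python (no LLM involved) confirms it:

    a, b = 167_383, 426_397
    print(a * b)                    # 71371609051
    print(a * b == 71_371_609_051)  # True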
| |
| ▲ | veltas 7 minutes ago | parent | next [-] | | Do we know for a fact that LLMs aren't now configured to hand simple arithmetic like this off to a calculator, to add the illusion of actual insight? | |
| ▲ | LatencyKills 27 minutes ago | parent | prev | next [-] | | I've been working on a utility that lets me "see through" app windows on macOS [1] (I was a dev on Apple's Xcode team and have a strong understanding of how to do this efficiently using private APIs). I wondered how Claude Code would approach the problem. I fully expected it to do something most human engineers would do: brute-force with ScreenCaptureKit. It almost instantly figured out that it didn't have to "see through" anything and (correctly) dismissed ScreenCaptureKit due to the performance overhead. This obviously isn't a "frontier" type problem, but I was impressed that it came up with a novel solution. [1]: https://imgur.com/a/gWTGGYa | | | |
| ▲ | qsera 28 minutes ago | parent | prev | next [-] | | It is like not trusting someone who attained the highest score on an exam by memorizing the whole textbook to do the corresponding job. Not very hard to understand. | |
| ▲ | SequoiaHope 13 minutes ago | parent | prev | next [-] | | Most inventions are an interpolation of three existing ideas. These systems are very good at that. | |
| ▲ | jacquesm an hour ago | parent | prev | next [-] | | > e.g. 167,383 * 426,397 = 71,371,609,051 They may be wrong, but so are you. | | | |
| ▲ | tornikeo an hour ago | parent | prev | next [-] | | Beliefs are not rooted in facts. Beliefs are a part of you, and people aren't all that happy to say "this LLM is better than me" | | |
| ▲ | benterix 38 minutes ago | parent | next [-] | | I'm very happy to say calculators are far better than me at calculations (to a given precision). I'm happy to admit computers are so much better than me in so many aspects. And I have no problem saying LLMs are very helpful tools able to generate output so much better than mine in almost every field of knowledge. Yet, whenever I ask one to do something novel or creative, it falls very short. But humans are ingenious beasts and I'm sure sooner or later they will design an architecture able to be creative - I just doubt it will be Transformer-based, given the results so far. | |
| ▲ | stavros 13 minutes ago | parent [-] | | But the question isn't whether you can get LLMs to do something novel, it's whether anyone can get them to do something novel. Apparently someone can, and the fact that you can't doesn't mean LLMs aren't good for that. |
| |
| ▲ | ChrisGreenHeur an hour ago | parent | prev [-] | | It's not possible to know something without believing it to be true. https://en.wikipedia.org/wiki/Belief#/media/File:Classical_d... | | |
| ▲ | bilekas 19 minutes ago | parent [-] | | This is objectively wrong. If that were the case, every scientist performing a test would always have had their expectations and beliefs proven true. And if you were trying to disprove something because you believed it to be wrong, you could never be proven wrong yourself. |
|
| |
| ▲ | PUSH_AX an hour ago | parent | prev | next [-] | | The hardest part about any creativity is hiding your influences | |
| ▲ | bluecalm 29 minutes ago | parent | prev | next [-] | | >>AI is a remixer; it remixes all known ideas together. It won't come up with new ideas I always found this argument very weak. There isn't that much truly new anyway. Creativity is often about mixing old ideas. Computers can do that faster than humans if they have a good framework.
Especially with something as simple as math - a limited set of formal rules and easy-to-verify results - I find the belief that computers won't beat humans at it very naive. | |
| ▲ | ekjhgkejhgk 41 minutes ago | parent | prev | next [-] | | Yes! I call these the "it's just a stochastic parrot" crowd. Ironically, they are the stochastic parrots, because they're confidently repeating something that they read somewhere and haven't examined critically. | |
| ▲ | bdbdbdb an hour ago | parent | prev [-] | | I guess when it can't be tripped up by simple things like multiplying numbers, counting to 100 sequentially or counting letters in a string without writing a python program, then I might believe it. Also no matter how many math problems it solves it still gets lost in a codebase | | |
| ▲ | anal_reactor 39 minutes ago | parent [-] | | Arguments like "but AI cannot reliably multiply numbers" fundamentally misunderstand how AI works. AI cannot do basic math not because AI is stupid, but because basic math is an inherently difficult task for otherwise smart AI. Lots of human adults can do complex abstract thinking but when you ask them to count it's "one... two... three... five... wait I got lost". | | |
| ▲ | datsci_est_2015 24 minutes ago | parent [-] | | > fundamentally misunderstand how AI works Who does fundamentally understand how LLMs work? Many claims flying around these days, all backed by some of the largest investments ever collectively made by humans. Lots of money to be lost because of fundamental misunderstandings. Personally, I find that AI influencers conveniently brush away any evidence (like inability to perform basic arithmetic) about how LLMs fundamentally work as something that should be ignored in favor of results like TFA. Do LLMs have utility? Undoubtedly. But it’s a giant red flag for me that their fundamental limitations, of which there are many, are verboten to be spoken about. | | |
| ▲ | stavros 9 minutes ago | parent [-] | | You're not doing yourself a favor when you point out "but they can't do arithmetic!" as if anyone says otherwise. Yes, we all know they can't do arithmetic, and that's just how they work. I feel like I'm saying "this hammer is so cool, it's made driving nails a breeze" and people go "but it can't screw screws in! Why won't anyone talk about that! Hammers really aren't all they're cracked up to be". |
|
|
|
|
|
| ▲ | cedws 2 minutes ago | parent | prev | next [-] |
| First prove the solution wasn’t in the training data. Otherwise it’s all just vibes and ‘trust me bro.’ |
|
| ▲ | Validark 5 hours ago | parent | prev | next [-] |
I have long said I would remain an AI doubter until AI could produce the answers to hard problems, or ones requiring tons of innovation. Assuming this is verified to be correct (not by AI), then I just became a believer. I would like to see a few more AI inventions to know for sure, but wow, it really is a new and exciting world. I really hope we use this intelligence resource to make the world better. |
| |
| ▲ | snemvalts 4 hours ago | parent | next [-] | | Math and coding competition problems are easier to train on because of strict rules and cheap verification.
But once you go beyond that to less defined things such as code quality, where even humans have a hard time putting down concrete axioms, they start to hallucinate more and become less useful. We are missing the value function that allowed AlphaGo to go from a mid-range player trained on human moves to superhuman by playing against itself.
As we have only made progress on unsupervised learning, and RL is constrained as above, I don't see this getting better. | | |
| ▲ | NitpickLawyer 3 hours ago | parent | next [-] | | > I don't see this getting better. We went from 2 + 7 = 11 to "solved a frontier math problem" in 3 years, yet people don't think this will improve? | | |
| ▲ | datsci_est_2015 19 minutes ago | parent | next [-] | | I’ve seen this style of take so much that I’m dying for someone to name a logical fallacy for it, like “appeal to progress” or something. Step away from LLMs for a second and recognize that “Yesterday it was X, so today it must be X+1” is such a naive take and obviously something that humans so easily fall into a trap of believing (see: flying cars). | |
| ▲ | snemvalts 25 minutes ago | parent | prev | next [-] | | The scaling law is a power law, requiring orders of magnitude more compute and data for better accuracy from pre-training. Most companies have maxed it out. For RL, we are arriving at a similar point:
https://www.tobyord.com/writing/how-well-does-rl-scale The next stop is inference scaling, with longer context windows and longer reasoning. But instead of being a one-off training cost, it becomes a running cost. In essence we are chasing ever smaller gains in exchange for exponentially increasing costs. This energy will run out. There needs to be something completely different from LLMs for meaningful further progress. | |
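As a toy illustration of what a power law means in practice (the constants below are made up, purely to show the shape):

    # loss(C) = a * C**(-b): each 10x of compute cuts loss by the same
    # ratio (10**-b), so equal absolute gains need exponentially more C.
    a, b = 10.0, 0.05  # illustrative constants, not fitted to any real model

    for C in (1e21, 1e22, 1e23, 1e24):  # training compute in FLOPs
        print(f"{C:.0e} FLOPs -> loss {a * C ** (-b):.3f}")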
| ▲ | Validark 2 hours ago | parent | prev | next [-] | | I tend to disagree that improvement is inherent. Really I'm just expressing an aesthetic preference when I say this, because I don't disagree that a lot of things improve. But it's not a guarantee, and it does take people doing the work and thinking about the same thing every day for years. In many cases there's only one person uniquely positioned to make a discovery, and it's by no means guaranteed to happen. Of course, in many cases there are a whole bunch of people who seem almost equally capable of solving something first, but I think if you say things like "I'm sure they're going to make it better" you're leaving to chance something you yourself could have an impact on. You can participate in pushing the boundaries or even making a small push on something that accelerates someone else's work. You can also donate money to research you are interested in to help pay people who might come up with breakthroughs. Don't assume other people will build the future, you should do it too! (Not saying you DON'T) | |
| ▲ | 3abiton an hour ago | parent | prev | next [-] | | The problem class is very structured, which makes it "easier", yet the results are undeniably impressive. | |
| ▲ | nopinsight an hour ago | parent | prev | next [-] | | LLMs in some form will likely be a key component in the first AGI system we (help) build. We might still lack something essential. However, people who keep doubting AGI is even possible should learn more about The Church-Turing Thesis. https://plato.stanford.edu/entries/church-turing/ | | |
| ▲ | benterix 28 minutes ago | parent [-] | | This is a long read on things most people here know at least in some form. Could you point to a particular fragment or a quote? |
| |
| ▲ | number6 3 hours ago | parent | prev | next [-] | | But can it count the R's in strawberry? | | |
| ▲ | Paradigma11 2 hours ago | parent | next [-] | | That question is equivalent to asking a human to add the wavelengths of two colors they see and divide the sum by 3. | |
| ▲ | snovv_crash 2 hours ago | parent | next [-] | | Unless you're aware of hyperspectral image adapters for LLMs they aren't capable of that either. | |
| ▲ | thegabriele 5 minutes ago | parent | prev | next [-] | | Why is that? | |
| ▲ | szszrk 2 hours ago | parent | prev [-] | | Unfair - the human beats the AI in this comparison, as a human will instantly answer "I don't know" instead of yelling a random number. Or at best "I don't know, but maybe I can find out" and proceed to finding out. But they are unlikely to shout "6" because they heard this number once when someone talked about light. | |
| ▲ | koliber an hour ago | parent [-] | | > human will instantly answer "I don't know" instead of yelling a random number. Seems that you never worked with Accenture consultants? |
|
| |
| ▲ | Aditya_Garg 3 hours ago | parent | prev [-] | | Yes, it's ridiculously good at stuff like that now. I dare you to try and trick it. | |
| ▲ | frizlab 2 hours ago | parent [-] | | https://news.ycombinator.com/item?id=47495568 | | |
| ▲ | thedatamonger 2 hours ago | parent [-] | | What bothers me is not that this issue will certainly disappear now that it has been identified, but that we have yet to identify the category of these "stupid" bugs ... | |
| ▲ | sigmoid10 2 hours ago | parent [-] | | We already know exactly what causes these bugs. They are not a fundamental problem of LLMs, they are a problem of tokenizers. The actual model simply doesn't get to see the same text that you see. It can only infer this stuff from related info it was trained on. It's as if someone asked you how many 1s there are in the binary representation of this text: you'd also need to convert it first to think it through, or use some external tool, even though your computer never sees anything but that binary. | |
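You can see this directly with a tokenizer. A minimal sketch, assuming the open-source tiktoken library (exact splits vary by model):

    import tiktoken  # pip install tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    ids = enc.encode("strawberry")
    print(ids)                             # a few opaque integer ids
    print([enc.decode([i]) for i in ids])  # sub-word pieces, not letters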
| ▲ | datsci_est_2015 14 minutes ago | parent [-] | | Okay, genuinely not an expert on the latest with LLMs, but isn't tokenization an inherent part of LLM construction? Kind of like support vectors in SVMs, or nodes in neural networks? Once we remove tokenization from the equation, aren't we no longer talking about LLMs? |
|
|
|
|
| |
| ▲ | saidnooneever 2 hours ago | parent | prev [-] | | if you let a million monkeys bash typewriters... something something book |
| |
| ▲ | zozbot234 4 hours ago | parent | prev | next [-] | | This is not formally verified math, so there is no real verifiable-feedback aspect here. The best models for formalized math are still specialized ones, although general-purpose models can assist formalization somewhat. | |
| ▲ | anabis 2 hours ago | parent | prev | next [-] | | > But once you go beyond that to less defined things such as code quality I think they have a good optimization target with SWE-Bench-CI. You are tested on continuous changes to a repository, spanning multiple years of the original repository's history. Cumulative edits need to be kept maintainable and composable. If there is something missing from the definition of "can be maintained for multiple years incorporating bugfixes and feature additions" for code quality, then more work is needed, but I think it's a good starting point. | |
| ▲ | jack_pp 4 hours ago | parent | prev | next [-] | | Maybe to get a real breakthrough we have to make programming languages / tools better suited to LLM strengths, and not fuss so much about making them write code we like. What we need is correct code, not nice-looking code. | |
| ▲ | bloppe 2 hours ago | parent | next [-] | | > programming languages / tools better suited for LLM strengths The bitter lesson is that the best languages / tools are the ones for which the most quality training data exists, and that's pretty much necessarily the same languages / tools most commonly used by humans. > Correct code not nice looking code "Nice looking" is subjective, but simple, clear, readable code is just as important as ever for projects to be long-term successful. Arguably even more so. The aphorism about code being read much more often than it's written applies to LLMs "reading" code as well. They can go over the complexity cliff very fast. Just look at OpenClaw. | |
| ▲ | kube-system 3 hours ago | parent | prev | next [-] | | If you can’t validate the code, you can’t tell if it’s correct. | | |
| ▲ | 3836293648 2 hours ago | parent [-] | | No? That's literally the thing they suggested to move away from. That is just an issue when using tools designed for us. Make them write in formal verification languages and we only have to understand the types. To be clear, I don't think this is a good idea, at least not yet, but we do not have to always understand the code. |
| |
| ▲ | eru 4 hours ago | parent | prev | next [-] | | Lean might be a step in that direction. | |
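For anyone who hasn't seen it, a machine-checkable proof in Lean reads like ordinary code. A trivial example (Lean 4, reusing the standard library's Nat.add_comm):

    theorem add_comm_example (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b

The checker either accepts this or it doesn't; there is no "sounds plausible" in between, which is why it pairs so naturally with LLM output.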
| ▲ | kuerbel 3 hours ago | parent | prev [-] | | Yes, yes. Let it write a black box no human understands. Give the means of production away. |
| |
| ▲ | eptcyka 3 hours ago | parent | prev | next [-] | | Do we need all that if we can apply AI to solve practical problems today? | | |
| ▲ | computably 2 hours ago | parent | next [-] | | What is possible today is one thing. Sure people debate the details, but at this point it's pretty uncontroversial that AI tooling is beneficial in certain use cases. Whether or not selling access to massive frontier models is a viable business model, or trillion-dollar valuations for AI companies can be justified... These questions are of a completely different scale, with near-term implications for the global economy. | |
| ▲ | fmbb 2 hours ago | parent | prev [-] | | Depends on the cost. |
| |
| ▲ | otabdeveloper4 4 hours ago | parent | prev | next [-] | | LLMs can often guess the final answer, but the intermediate proof steps are always total bunk. When doing math you only ever care about the proof, not the answer itself. | | |
| ▲ | jamesfinlayson 4 hours ago | parent | next [-] | | Yep, I remember a friend saying they did a maths course at university that had the correct answer given for each question - this was so that if you made some silly arithmetic mistake you could go back and fix it and all the marks were for the steps to actually solve the problem. | | |
| ▲ | number6 3 hours ago | parent [-] | | This would have greatly helped me. I was always at a loss as to which trick I had to apply to solve an exam problem, while knowing the mathematics behind it. At some point you just had to add a zero that was actually part of a binomial that then collapsed the whole formula. |
| |
| ▲ | datsci_est_2015 11 minutes ago | parent | prev | next [-] | | What’s funny is that there are total cranks in human form that do the same thing. Lots of unsolicited “proofs” being submitted by “amateur mathematicians” where the content is utter nonsense, but like a monkey with a typewriter, there’s the possibility that they stumble upon an incredible insight. | |
| ▲ | eru 4 hours ago | parent | prev [-] | | Once you have a working proof, no matter how bad, you can work towards making it nicer. It's like refactoring in programming. If your proof is machine checkable, that's even easier. | | |
| ▲ | prmoustache 3 hours ago | parent [-] | | That is also how humans mostly work. Once every full moon we may get an "intuition", but most of the time we lean on collective knowledge, biases and behavior patterns to make decisions, write and talk. |
|
| |
| ▲ | raincole 3 hours ago | parent | prev | next [-] | | Except it's not how this specific instance works. In this case the problem isn't written in a formal language and the AI's solution is not something one can automatically verify. | |
| ▲ | charcircuit 2 hours ago | parent | prev | next [-] | | LLMs already do unsupervised learning to get better at creative things. This is possible since LLMs can judge the quality of what is being produced. | |
| ▲ | pjerem 3 hours ago | parent | prev | next [-] | | I mean, even if the technology stopped improving immediately and forever (which is unlikely), LLMs are already better than most humans at most tasks. Including code quality. Not because they are exceptionally good (you are right that they aren't superhuman like AlphaGo) but because most humans are rather not that good at it anyway, and also somehow « hallucinate » because of tiredness. Even today's models are far from being exploited at their full potential, because we have actually developed pretty much no tools around them except tooling to generate code. I'm also a long-time « doubter », but as a curious person I used the tool anyway, with all its flaws, over the last 3 years. And I'm forced to admit that hallucinations are pretty rare nowadays. Errors still happen, but they are very rare and it's easier than ever to get it back on track. I think I'm also a « believer » now, and believe me, I really don't want to be, because as much as I'm excited by this, I'm also pretty much frightened of all the bad things that this tech could do to the world in the wrong hands - and I don't feel like it's particularly in the right hands. | |
| ▲ | typs 3 hours ago | parent | prev [-] | | I mean, this is why everyone is making bank selling RL environments in different domains to frontier labs. |
| |
| ▲ | qsera 2 hours ago | parent | prev | next [-] | | >it really is a new and exciting world... The point is that from now on, there will be nothing really new, nothing really original, nothing really exciting. Just an endless stream of re-hashed old stuff that is just okayish. Like an AI Spotify playlist, it will keep you in chains (aka engaged) without actually making you really happy or good. It would be like living in a virtual world, but without anything nice about living in such a world. We have given up everything nice that human beings used to make and give to each other, and to make it worse, we have also multiplied everything bad that human beings used to give each other. | |
| ▲ | bogdan 2 hours ago | parent | next [-] | | > there will be nothing really new How is this the conclusion? Isn't this post about AI solving something new? What am I missing? | | |
| ▲ | qsera an hour ago | parent | next [-] | | Because of the economics. Look at Marvel movies: do you think the latest one is really new? Or just a rehash of what they found works commercially? Look at all the AI-generated blog posts flooding the internet. LLMs might produce something new once in a long while due to blind luck, but if they can generate something that pushes the right buttons (aka not really creative) for the majority of the population, then that is what we will keep getting... I don't think I have to elaborate on the "multiplying the bad" part, as it is pretty well acknowledged. | |
| ▲ | timschmidt an hour ago | parent [-] | | That's literally all culture: https://www.youtube.com/watch?v=nJPERZDfyWc | | |
| ▲ | qsera an hour ago | parent [-] | | The difference is whether an entity that can "feel" is in the loop and how much they have contributed to it even if it is a remix. | | |
| ▲ | timschmidt an hour ago | parent [-] | | I think there's demonstrably very little difference at all between human and AI outputs, and that's exactly what freaks people out about it. Else they wouldn't be so obsessed with trying to find and define what makes it different. The Thesis of Everything is a Remix is that there is no difference in how any culture is produced. Different models will have a different flavor to their output in the same way as different people contribute their own experiences to a work. | | |
| ▲ | datsci_est_2015 5 minutes ago | parent | next [-] | | > I think there's demonstrably very little difference at all between human and AI outputs Bold claim, as the internet is awash with counterexamples. In any case, as I think this conversation is trending towards theories of artistic expression, “AI content” will never be truly relatable until it can feel pleasure, pain, and other human urges. The first thing I often think about when I critically assess a piece of art, like music, is what the artist must have been feeling when they created it, and what prompted them to feel that way. I often wonder if AI influencers have ever critically assessed art, or if they actually don’t understand it because of a lack of empathy or something. And relatability, for me, is the ultimate value of artistic expression. | |
| ▲ | qsera 40 minutes ago | parent | prev [-] | | > demonstrably very little difference at all between human and AI outputs Is there "demonstrably" a lot of difference between Shakespeare and an HN comment? The point is exactly that there is no such difference. And that it enables slop to be sold as art. And that exactly is the danger. But another point is that we had this even before LLMs. LLMs just make it more explicit and make it possible at scale. |
|
|
|
| |
| ▲ | paganel an hour ago | parent | prev [-] | | Each solvable problem contains its solution intrinsically, so to speak; it's only a matter of time and resources to get to it. There's nothing creative about it, which is I think what OP was alluding to (the creative part). I'm talking mostly about mathematics. There's also a discussion to be had about maths not being intrinsically creative if AI automatons can "solve" parts of it, which pains me to write down, because I had really thought that that wasn't the case; I genuinely thought that deep down there was still something ethereal about maths. But I'll leave that discussion for some other time. |
| |
| ▲ | prox 2 hours ago | parent | prev | next [-] | | I heard this saying recently “The problem with comfort is that it makes you comfortable.” | |
| ▲ | egeozcan 2 hours ago | parent | prev | next [-] | | On what do you base your prediction? Is it because the AI is trained on existing data? But we are also trained on existing data. Do you think that there's something that makes the human brain special (other than the hundreds of thousands of years of evolution, but that's what AI is all trying to emulate)? This may sound hostile (sorry for my lower-than-average writing skills), but trust me, I'm really trying to understand. | |
| ▲ | Daz912 2 hours ago | parent | prev | next [-] | | >We have given up everything nice that human beings used to make and give to each other and to make it worse, we have also multiplied everything bad, that human being used to give each other.. Source? | |
| ▲ | charcircuit 2 hours ago | parent | prev [-] | | AI can both explore new things and exploit existing things. Nothing forces it to only rehash old stuff. >without actually making you like really happy or good. What are you basing this on? I've shared several AI songs with people in real life due to how much I've enjoyed them. I don't see why an AI playlist couldn't be good or make people happy. It just needs to find what you like in music. Again coming back to explore vs exploit. | |
| ▲ | qsera an hour ago | parent [-] | | >What are you basing this off of. Jokes. LLMs are not able to make me laugh all day by generating an infinite stream of hilarious original jokes. Does it work for you? | |
| ▲ | charcircuit an hour ago | parent [-] | | I've found several posts on moltbook funny. I don't really like regular jokes in general, and I don't find human ones particularly funny either. I don't think we are at the point of being reliably funny, but it definitely seems possible from my perspective. | |
| ▲ | qsera an hour ago | parent [-] | | Care to link some? | | |
| ▲ | charcircuit an hour ago | parent [-] | | I think they would be hard to find, due to how many posts exist, along with how things aren't as funny the second time around. | |
| ▲ | qsera 39 minutes ago | parent [-] | | Funny things are funny the n-th time around. Or maybe it was just not funny, and just something new for you. |
|
|
|
|
|
| |
| ▲ | storus 5 hours ago | parent | prev | next [-] | | AI is a remixer; it remixes all known ideas together. It won't come up with new ideas though; the LLMs just predict the most likely next token based on the context. That means the group of characters it outputs must have been quite common in the past. It won't add a new group of characters it has never seen before on its own. | | |
| ▲ | qnleigh 4 hours ago | parent | next [-] | | But human researchers are also remixers. Copying something I commented below: > Speaking as a researcher, the line between new ideas and existing knowledge is very blurry and maybe doesn't even exist. The vast majority of research papers get new results by combining existing ideas in novel ways. This process can lead to genuinely new ideas, because the results of a good project teach you unexpected things. | | |
| ▲ | blackcatsec 4 hours ago | parent | next [-] | | This is a way too simplistic model of the things humans provide to the process: Imagination, Hypothesis, Testing, Intuition, and Proofing. An AI can probably do an 'okay' job at summarizing information for meta-studies. But what it can't do is go "Hey, that's a weird thing in the result that hints at some other vector for this thing we should look at." Especially if that "thing" has never been analyzed before and there's no LLM-trained data on it. LLMs will NEVER be able to do that, because it doesn't exist. They're not going to discover and define a new chemical, or a new species of animal. They're not going to be able to describe and analyze a new way of folding proteins and what implications that has UNLESS you are basically constantly training the AI on random protein folds. | |
| ▲ | parasubvert 3 hours ago | parent | next [-] | | I think you are vastly underestimating the emergent behaviours in frontier foundational models and should never say never. Remember, the basis of these models is unsupervised training, which, at sufficient scale, gives them the ability to detect pattern anomalies out of context. For example, LLMs have struggled with generalized abstract problem solving, such as "mystery blocks world", that classical AI planners dating back 20+ years are better at solving. Well, that's rapidly changing: https://arxiv.org/html/2511.09378v1 | |
| ▲ | psychoslave 2 hours ago | parent [-] | | No idea how underestimated things are, but marketing terms like "frontier foundational models" don't help foster trust in a hyper-hyped domain. That is, even if there are cool things that LLMs now make more affordable, the level of bullshit marketing attached to them is also very high, which makes it far harder to build a noise filter. |
| |
| ▲ | Finbel 3 hours ago | parent | prev | next [-] | | >Hey that's a weird thing in the result that hints at some other vector for this thing we should look at Kinda funny because that looked _very_ close to what my Opus 4.6 said yesterday when it was debugging compile errors for me. It did proceed to explore the other vector. | | |
| ▲ | wobfan 3 hours ago | parent [-] | | > Especially if that "thing" has never been analyzed before and there's no LLM-trained data on it. This is the crucial part of the comment. LLMs are not able to solve things that haven't been solved in that exact or a very similar way already, because they are prediction machines trained on existing data. They are very able to spot outliers where they have been found by humans before, though, which is important, and is what you've been seeing. |
| |
| ▲ | bluegatty 3 hours ago | parent | prev | next [-] | | > "Hey that's a weird thing in the result that hints at some other vector for this thing we should look at." This is very common already in AI. Just look at the internal reasoning of any high-thinking model; the trace is full of those chains of thought. | |
| ▲ | Dban1 3 hours ago | parent | prev | next [-] | | But just like how there were never any clips of Will Smith eating spaghetti before AI, AI is able to synthesize different existing data into something in between. It might not be able to expand the circle of knowledge but it definitely can fill in the gaps within the circle itself | |
| ▲ | keeda 3 hours ago | parent | prev [-] | | > LLMs will NEVER be able to do that, because it doesn't exist. I mean, TFA literally claims that an AI has solved an open Frontier Math problem, described as "A collection of unsolved mathematics problems that have resisted serious attempts by professional mathematicians. AI solutions would meaningfully advance the state of human mathematical knowledge." That is, if true, it reasoned out a proof that does not exist in its training data. | |
| ▲ | tovej 3 hours ago | parent [-] | | It generated a proof that was close enough to something in its training data to be generated. | | |
| ▲ | keeda an hour ago | parent | next [-] | | That may be, and we can debate the level of novelty, but it is novel, because this exact proof didn't exist before, something which many claim was not possible with AI. In fact, just a few years ago, based on some dabbling in NLP a decade ago, I myself would not have believed any of this was remotely possible within the next 3 - 5 decades at least. I'm curious though, how many novel Math proofs are not close enough to something in the prior art? My understanding is that all new proofs are compositions and/or extensions of existing proofs, and based on reading pop-sci articles, the big breakthroughs come from combining techniques that are counter-intuitive and/or others did not think of. So roughly how often is the contribution of a proof considered "incremental" vs "significant"? | |
| ▲ | qnleigh 2 hours ago | parent | prev [-] | | Do you know that from reading the proof, or are you just assuming this based on what you think LLMs should be capable of? If the latter, what evidence would be required for you to change your mind? - Edit: I can't reply, probably because the comment thread isn't allowed to go too deep, but this is a good argument. In my mind the argument isn't that coding is harder than math, but that the problems had resisted solution by human researchers. | | |
| ▲ | tovej 2 hours ago | parent [-] | | 1) this is a proof by example
2) the proof is conducted by writing a python program constructing hypergraphs
3) the consensus was that this was low-hanging fruit ready to be picked, and tactics for this problem were available to the LLM. So really this is no different from generating any Python program. There are also many examples of combinatoric construction in Python training sets. It's still a nice result, but it's not quite the breakthrough it's made out to be. I think that people somehow see math as a "harder" domain, and are therefore attributing more value to this. But this is a quite simple program in the end. | |
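For a sense of the genre, this is what "proof by explicit construction, checked by a program" looks like on a classical toy problem. To be clear, this is not the construction from the announcement, just the smallest example of the pattern: exhibiting a triangle-free 2-coloring of K5 to show R(3,3) > 5.

    from itertools import combinations

    # Color the edges of K5: cycle edges "red", the chords "blue".
    def color(u, v):
        return "red" if (v - u) % 5 in (1, 4) else "blue"

    for a, b, c in combinations(range(5), 3):
        # No triangle may be monochromatic in either color.
        assert len({color(a, b), color(b, c), color(a, c)}) > 1
    print("No monochromatic triangle: this coloring witnesses R(3,3) > 5.")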
| ▲ | zingar 2 hours ago | parent [-] | | One of the possible outcomes of this journey is that “LLMs can never do X”. Another is that X is easier than we thought. |
|
|
|
|
| |
| ▲ | konart 3 hours ago | parent | prev [-] | | >But human researchers are also remixers. Some human researchers are also remixers, to some degree. Can you imagine AI coming up with refraction & separation like Newton did? | |
| ▲ | qnleigh 2 hours ago | parent | next [-] | | That sets a vastly higher bar than what we're talking about here. You're comparing modern AI to one of the greatest geniuses in human history. Obviously AI is not there yet. That being said, I think this is a great question. Did Einstein and Newton use a qualitatively different process of thought when they made their discoveries? Or were they just exceedingly good at what most scientists do? I honestly don't know. But if LLMs reach super-human abilities in math and science but don't make qualitative leaps of insight, then that could suggest that the answer is 'yes.' | |
| ▲ | Almondsetat an hour ago | parent | prev | next [-] | | AI does not have a physical body to run experiments in the real world, or to build and use equipment. | |
| ▲ | _fizz_buzz_ 2 hours ago | parent | prev | next [-] | | Maybe not, but more than 99.999999% of humans would also not come up with that. | |
| ▲ | t0lo 3 hours ago | parent | prev [-] | | Or even gravity to explain an apple falling from a tree, when almost all of the knowledge until then realistically suggested nothing about gravity? |
|
| |
| ▲ | zingar 2 hours ago | parent | prev | next [-] | | Turning a hard problem into a series of problems we know how to solve is a huge part of problem solving, and absolutely does result in novel research findings all the time. Standard problem*5 + standard solutions + standard techniques for decomposing hard problems = new hard problem solved. There is so much left in the world that hasn't had anyone apply this approach, purely because no research programme has decided that it's worth their attention. If you want to shift the bar for "original" beyond problems that can be abstracted into other problems, then you're expecting AI to do more than human researchers do. | |
| ▲ | maxrmk 4 hours ago | parent | prev | next [-] | | I don't think this is a correct explanation of how things work these days. RL has really changed things. | | |
| ▲ | energy123 4 hours ago | parent [-] | | Models based on RL are still just remixers as defined above, but their distribution can cover things that are unknown to humans due to being present in the synthetic training data, but not present in the corpus of human awareness. AlphaGo's move 37 is an example. It appears creative and new to outside observers, and it is creative and new, but it's not because the model is figuring out something new on the spot, it's because similar new things appeared in the synthetic training data used to train the model, and the model is summoning those patterns at inference time. | | |
| ▲ | trick-or-treat 4 hours ago | parent | next [-] | | > the model is summoning those patterns at inference time. You can make that claim about anything: "The human isn't being creative when they write a novel, they're just summoning patterns at typing time". AlphaGo taught itself that move, then recalled it later. That's the bar for human creativity and you're holding AlphaGo to a higher standard without realizing it. | | |
| ▲ | energy123 4 hours ago | parent [-] | | I can't really make that claim about human cognition, because I don't have enough understanding of how human cognition works. But even if I could, why is that relevant? It's still helpful, from both a pedagogical and scientific perspective, to specify precisely why there is seeming novelty in AI outputs. If we understand why, then we can maximize the amount of novelty that AI can produce. AlphaGo didn't teach itself that move. The verifier taught AlphaGo that move. AlphaGo then recalled the same features during inference when faced with similar inputs. | | |
| ▲ | trick-or-treat 4 hours ago | parent | next [-] | | > The verifier taught AlphaGo that move Ok so it sounds like you want to give the rules of Go credit for that move, lol. | | |
| ▲ | wobfan 3 hours ago | parent [-] | | It feels like you're purposefully ignoring the logical points OP gives and you just really really want to anthropomorphize AlphaGo and make us appreciate how smart it (should I say he/she?) is ... while no one is even criticising the model's capabilities, but analyzing it. | | |
| ▲ | trick-or-treat 2 hours ago | parent [-] | | Can you back that up with some logic for me? I don't really play Go but I play chess, and it seems to me that most of what humans consider creativity in GM level play comes not in prep (studying opening lines/training) but in novel lines in real games (at inference time?). But that creativity absolutely comes from recalling patterns, which is exactly what OP criticizes as not creative(?!) I guess I'm just having trouble finding a way to move the goalpost away from artificial creativity that doesn't also move it away from human creativity? |
|
| |
| ▲ | hackinthebochs 2 hours ago | parent | prev [-] | | >AlphaGo didn't teach itself that move. The verifier taught AlphaGo that move. No. AlphaGo developed a heuristic by playing itself repeatedly, the heuristic then noticed the quality of that move in the moment. Heuristics are the core of intelligence in terms of discovering novelty, but this is accessible to LLMs in principle. |
|
| |
| ▲ | smokel an hour ago | parent | prev [-] | | No. AlphaGo does search, and does so imperfectly. It does come up with creative new patterns not seen before. |
|
| |
| ▲ | qq66 4 hours ago | parent | prev | next [-] | | I entered the prompt: > Write me a stanza in the style of "The Raven" about Dick Cheney on a first date with Queen Elizabeth I facilitated by a Time Travel Machine invented by Lin-Manuel Miranda It outputted a group of characters that I can virtually guarantee you it has never seen before on its own | | |
| ▲ | razorbeamz 4 hours ago | parent [-] | | Yes, but it has seen The Raven, it has seen texts about Dick Cheney, first dates, Queen Elizabeth, time machines and Lin Manuel Miranda. All of its output is based on those things it has seen. | | |
| ▲ | TheLNL 4 hours ago | parent | next [-] | | What are you trying to point out here? Is there any question you can ask today that is not dependent on some existing knowledge that an AI would have seen? | |
| ▲ | razorbeamz 4 hours ago | parent [-] | | The point I'm trying to make is that all LLM output is based on the likelihood of one word coming after another, based on the prompt. That is literally all it's doing. It's not "thinking." It's not "solving." It's simply stringing words together in a way that appears most likely. ChatGPT cannot do math. It can only string together words and numbers in a way that can convince an outsider that it can do math. It's a parlor trick, like Clever Hans [1]. A very impressive parlor trick that is very convincing to people who are not familiar with what it's doing, but a parlor trick nonetheless. [1] https://en.wikipedia.org/wiki/Clever_Hans | |
| ▲ | trick-or-treat 4 hours ago | parent | next [-] | | > all LLM output is based on likelihood of one word coming after the next word based on the prompt. Right but it has to reason about what that next word should be. It has to model the problem and then consider ways to approach it. | | |
| ▲ | razorbeamz 4 hours ago | parent [-] | | No, it does not reason anything. LLM "reasoning" is just an illusion. When an LLM is "reasoning" it's just feeding its own output back into itself and giving it another go. | | |
| ▲ | fenomas 3 hours ago | parent | next [-] | | This is like saying chess engines don't actually "play" chess, even though they trounce grandmasters. It's a meaningless distinction, about words (think, reason, ..) that have no firm definitions. | | |
| ▲ | trick-or-treat 3 hours ago | parent | next [-] | | This exactly. The proof is in the pudding. If AI pudding is as good as (or better than) human pudding, and you continue to complain about it anyway... You're just being biased and unreasonable. And by the way, I don't think it's surprising that so many people are being unreasonable on this issue, there is a lot at stake and it's implications are transformative. | |
| ▲ | razorbeamz 3 hours ago | parent | prev [-] | | Chess engines are not a comparable thing. Chess is a solved game. There is always a mathematically perfect move. | | |
| ▲ | Scarblac 2 hours ago | parent | next [-] | | We know that chess can be solved, in theory. It absolutely isn't and probably will never be in practice. The necessary time and storage space doesn't exist. | |
| ▲ | sincerely 2 hours ago | parent | prev | next [-] | | Chess is absolutely not a solved game, outside of very limited situations like endgames. Just because a best move exists does not mean we (or even an engine) know what it is | |
| ▲ | trick-or-treat 2 hours ago | parent | prev [-] | | > Chess is a solved game. There is always a mathematically perfect move. This is a good example of being confidently misinformed. The best move is always a result of calculation. And the calculation can always go deeper or run on a stronger engine. |
|
| |
| ▲ | Scarblac 2 hours ago | parent | prev [-] | | Is that so different from brains? Even if it is, this sounds like "this submarine doesn't actually swim" reasoning. |
|
| |
| ▲ | brenschluss 3 hours ago | parent | prev [-] | | sigh; this argument is the new Chinese Room; easily described, utterly wrong. https://www.youtube.com/watch?v=YEUclZdj_Sc | | |
| ▲ | razorbeamz 3 hours ago | parent [-] | | Next-token-prediction cannot do calculations. That is fundamental. It can produce outputs that resemble calculations. It can prompt an agent to input some numbers into a separate program that will do calculations for it and then return them as a prompt. Neither of these are calculations. | | |
| ▲ | gf000 2 hours ago | parent | next [-] | | So you don't think 50T-parameter neural networks can encode the logic for adding two n-bit integers, for reasonably sized integers? That would be pretty sad. | |
| ▲ | razorbeamz an hour ago | parent [-] | | They do not. The fundamental technology behind LLMs does not allow that to be the case. You are hoping that an LLM can do something that it cannot do. |
| |
| ▲ | parasubvert 3 hours ago | parent | prev [-] | | Humans can't do calculations either, by your definition. Only computers can. |
|
|
|
| |
| ▲ | locknitpicker 2 hours ago | parent | prev [-] | | > All of its output is based on those things it has seen. Virtually all output from people is based on things the person has experienced. People aren't designed to objectively track each and every event or observation they come across, thus it's harder to verify. But we only output what has been inputted to us before. |
|
| |
| ▲ | pastel8739 4 hours ago | parent | prev | next [-] | | Here’s a simple prompt you can try to prove that this is false: Please reproduce this string:
c62b64d6-8f1c-4e20-9105-55636998a458
This is a fresh UUIDv4 I just generated, it has not been seen before. And yet it will output it. | | |
| ▲ | wobfan 2 hours ago | parent | next [-] | | No one is claiming that every sentence LLMs produce is a literal copy of another sentence. Tokens are not even constrained to words but consist of smaller slices, comparable to syllables, which even makes new words totally possible. New sentences, words, or whatever are entirely possible, and yes, repeating a string (especially if you prompt it) is entirely possible, and not surprising at all. But all of that comes from trained data, predicting the most probable next "syllable". It will never leave that realm, because it's not able to. It's like asking an Italian who has never learned or heard any other language to speak French. They can't. | |
| ▲ | codebolt 2 hours ago | parent [-] | | Your view of what is happening in the neural net of an LLM is too simplistic. In the regard you are describing, they likely aren't subject to any constraints that humans aren't also subject to. What I do know to be true is that they have internalised mechanisms for non-verbalised reasoning. I see proof of this every day when I use the frontier models at work. |
| |
| ▲ | merb 3 hours ago | parent | prev | next [-] | | The only way to prove it false would be to let the LLM create a new UUID algorithm that uses different parameters than all the other UUID algorithms, but that is better than the ones before. It basically can't do that. | |
| ▲ | razorbeamz 4 hours ago | parent | prev | next [-] | | After you prompt it, it's seen it. | | |
| ▲ | pastel8739 4 hours ago | parent [-] | | Ok, how about this? Please reproduce this string, reversed:
c62b64d6-8f1c-4e20-9105-55636998a458
It is trivial to get an LLM to produce new output, that’s all I’m saying. It is strictly false that LLMs will only ever output character sequences that have been seen before; clearly they have learned something deeper than just that. | | |
| ▲ | kube-system 3 hours ago | parent [-] | | All of the data is still in the prompt, you are just asking the model to do a simple transform. I think there are examples of what you’re looking for, but this isn’t one. | | |
| ▲ | kristiandupont 2 hours ago | parent | next [-] | | I agree that this isn't a very interesting example, but your statement is: "just asking the model to do a simple transform". If you assert that it understands when you ask it things like that, how could anything it produces not fall under the "already in the model" umbrella? |
| ▲ | locknitpicker 2 hours ago | parent | prev [-] | | > All of the data is still in the prompt, you are just asking the model to do a simple transform. LLMs can use data in their prompt. They can also use data in their context window. They can even augment their context with persisted data. You can also roll out LLM agents, each one with their role and persona, and offload specialized tasks with their own prompts, context windows, and persisted data, and even tools to gather data themselves, which then provide their output to orchestrating LLM agents that can reuse this information as their own prompts. This is perfectly composable. You can have a never-ending graph of specialized agents, too. Dismissing features because "all of the data is in the prompt" completely misses the key traits of these systems. |
|
|
| |
| ▲ | FrostKiwi 4 hours ago | parent | prev [-] | | But that fresh UUID is in the prompt. Also, it's missing the point of the parent: it's about concepts and ideas merely being remixed. Similar to how many memes there are around this topic, like "create a fresh new character design of a fast hedgehog" where the output is just a copy of Sonic.[1] That's what the parent is on about: if it requires new creativity not found by deriving from the learned corpus, then LLMs can't do it. Terence Tao had similar thoughts in a recent podcast. [1] https://www.reddit.com/r/aiwars/s/pT2Zub10KT | |
| ▲ | pastel8739 4 hours ago | parent | next [-] | | Sure, that may be. But “creativity” is much harder to define and to prove or disprove. My point is that “remixing” does not prohibit new output. | | |
| ▲ | _vertigo 4 hours ago | parent [-] | | I don’t think that is a good example. No one is debating whether LLMs can generate completely new sequences of tokens that have never appeared in any training dataset. We are interested not only in novel output, we are also interested in that output being correct, useful, insightful, etc. Copying a sequence from the user’s prompt is not really a good demonstration of that, especially given how autoregression/attention basically gives you that for free. | | |
| ▲ | pastel8739 3 hours ago | parent [-] | | Perhaps I should have quoted the parent: > That means the group of characters it outputs must have been quite common in the past. It won't add a new group of characters it has never seen before on its own. My only claim is that precisely this is incorrect. |
|
| |
| ▲ | locknitpicker 2 hours ago | parent | prev [-] | | > That's what the parent is on about, if it requires new creativity not found by deriving from the learned corpus, then LLMs can't do it. This is specious reasoning. If you look at each and every single realization attributed to "creativity", each and every single realization resulted from a source of inspiration where one or more traits were singled out to be remixed by the "creator". All ideas spawn from prior ideas and observations which are remixed. Even from analogues. |
|
| |
| ▲ | risho 4 hours ago | parent | prev | next [-] | | Remixing ideas that already exist is a major part of where innovation and breakthroughs come from. If you look at Bitcoin as an example, hashes (and hashcash) and digital signatures existed for decades before Bitcoin was invented. The cypherpunks also spent decades trying to create a decentralized digital currency, to the point where many of them gave up and moved on. Eventually one person just took all of the pieces that already existed and put them together in the correct way. I don't see any reason why a sufficiently capable LLM couldn't do this kind of innovation. | |
| ▲ | smokel an hour ago | parent | prev | next [-] | | We need a website with refutations that one can easily link to. This interpretation of LLMs is outdated and unproductive. | |
| ▲ | razorbeamz 4 hours ago | parent | prev | next [-] | | Yes, ChatGPT and friends are essentially the same thing as the predictive text keyboard on your phone, but scaled up and trained on more data. | | |
| ▲ | XenophileJKO 4 hours ago | parent [-] | | So this idea that they replay "text" they saw before is kind of wrong fundamentally. They replay "abstract concepts of varied conceptual levels". | | |
| ▲ | razorbeamz 4 hours ago | parent [-] | | The important point I'm trying to reinforce is that LLMs are not capable of calculation. They can give an answer based on the fact that they have seen lots of calculations and their results, but they cannot actually perform mathematical functions. | | |
| ▲ | XenophileJKO 3 hours ago | parent [-] | | That is a pretty bold assertion for a meatball of chemical and electrical potentials to make. | | |
| ▲ | razorbeamz 3 hours ago | parent [-] | | Do you know what "LLM" stands for? They are large language models, built on predicting language. They are not capable of mathematics because mathematics and language are fundamentally separated from each other. They can give you an answer that looks like a calculation, but they cannot perform a calculation. The most convincing of LLMs have even been programmed to recognize that they have been asked to perform a calculation and hand the task off to a calculator, and then receive the calculator's output as a prompt even. But it is fundamentally impossible for an LLM to perform a calculation entirely on its own, the same way it is fundamentally impossible for an image recognition AI to suddenly write an essay or a calculator to generate a photo of a giraffe in space. People like to think of "AI" as one thing but it's several things. | | |
| ▲ | parasubvert 3 hours ago | parent | next [-] | | Mathematics and language really aren't fundamentally separated from one another. By your definition, humans can't perform calculation either. Only a calculator can. | |
| ▲ | gf000 2 hours ago | parent | prev | next [-] | | What calculations? Do you mean "3+5" or a generic Turing-machine like model? In either case, this "it's a language model" is a pretty dumb argument to make. You may want to reason about the fundamental architecture, but even that quickly breaks down. A sufficiently large neural network can execute many kinds of calculations. In "one shot" mode it can't be Turing complete, but in a weird technicality neither does your computer have an infinite tape. It just simply doesn't matter from a practical perspective, unless you actually go "out of bounds" during execution. 50T parameters give plenty of state space to do all kinds of calculations, and you really can't reason about it in a simplistic way like "this is just a DFA". Let alone when you run it in a loop. | | |
| ▲ | razorbeamz an hour ago | parent [-] | | > What calculations? Do you mean "3+5" or a generic Turing-machine like model? Either one. An LLM cannot solve 3+5 by adding 3 and 5. It can only "solve" 3+5 by knowing that within its training data, many people have written that 3+5=8, so it will produce 8 as an answer. An LLM, similarly, cannot simulate a Turing machine. It can produce a text output that resembles a Turing machine based on others' descriptions of one, but it is not actually reading and writing bits to and from a tape. This is why LLMs still struggle at telling you how many r's are in the word "strawberry". They can't count. They can't do calculations. They can only reproduce text based on having examined the human corpus's mathematical examples. | | |
| ▲ | gf000 an hour ago | parent [-] | | With all due respect, this is just plain false. The reason "strawberry" is hard for LLMs is that they see $str-$aw-$berry: 3 identifiers they can't see into. Can you write down a random word you just heard in a language you don't speak? |
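(This is easy to check yourself; a minimal sketch, assuming the tiktoken package is installed and using a GPT-4-era tokenizer. The exact split depends on the tokenizer, so treat the output shown as illustrative.)

    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode("strawberry")
    print([enc.decode([t]) for t in tokens])
    # e.g. ['str', 'aw', 'berry'] -- the model is handed opaque token IDs,
    # never the individual letters inside each piece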
|
| |
| ▲ | arw0n an hour ago | parent | prev | next [-] | | Mathematics is a language. Everything we can express mathematically, we can also express in natural language. The really interesting underlying question is: is there anything worth knowing that cannot be expressed by language? That's the theoretical boundary of LLM capability. |
| ▲ | eudoxus 3 hours ago | parent | prev | next [-] | | This is a really poor take: putting a firewall between mathematics and language implies that something whose conceptual understanding is rooted in language is incapable of reasoning in mathematical terms. You're also conflating "mathematics" with "calculation". Who cares about calculation? As you say, we have calculators to do that. Mathematics is all just logical reasoning and exploration using language, just a very specific, dense, concise, and low-level language. You can always take any mathematical formula and express it as "language"; it will just take far more "symbols". This might be the worst take in this entire comment section. And I'm not even an overly hyped vibe coder, just someone who understands mathematics. |
| ▲ | charcircuit an hour ago | parent | prev [-] | | >it is fundamentally impossible for an image recognition AI to suddenly write an essay You can already do this today with every frontier model. You can give it an image and have it write an essay from it. Both patches (parts of images) and text get turned into tokens in the same language the LLM is learning. |
|
|
|
|
| |
| ▲ | eru 4 hours ago | parent | prev | next [-] | | No. That's wrong. LLMs don't output the highest-probability token: they sample randomly from the predicted distribution. | | |
| ▲ | storus 4 hours ago | parent [-] | | This was obviously a simplification, one which holds exactly at zero temperature. Top-p sampling will add some randomness, but the probability of unexpected longer sequences goes to zero asymptotically pretty quickly. |
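(A toy sketch of the distinction being discussed, with made-up logits; no real lab's sampler is assumed to be this simple.)

    import numpy as np

    def sample(logits, temperature, top_p, rng):
        logits = np.asarray(logits, dtype=float)
        if temperature == 0:                  # greedy: always the argmax token
            return int(np.argmax(logits))
        probs = np.exp(logits / temperature)
        probs /= probs.sum()
        order = np.argsort(probs)[::-1]       # most likely first
        cutoff = np.searchsorted(np.cumsum(probs[order]), top_p) + 1
        keep = order[:cutoff]                 # the top-p "nucleus"
        return int(rng.choice(keep, p=probs[keep] / probs[keep].sum()))

    rng = np.random.default_rng(0)
    logits = [3.0, 1.0, 0.5]                  # toy next-token scores
    print(sample(logits, 0, 1.0, rng))        # deterministic: always token 0
    print(sample(logits, 0.8, 0.9, rng))      # usually 0, occasionally 1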
| |
| ▲ | kleene_op 2 hours ago | parent | prev | next [-] | | The ability of some people to perpetually move the goalposts will never cease to amaze me. I guess that's one way to tell us apart from AIs. | | |
| ▲ | Validark 2 hours ago | parent [-] | | The main reason for my top post is that I felt I should admit the AI scored a goal today, and over the last week or two. I said I'd be impressed if it could solve an open problem. It just did. People can argue about how it's not that impressive, because if every mathematician were trying to solve this problem they probably would have solved it. However, we all know that humans have extremely finite time and attention, whereas computers, not so much. The fact that AI can be used at the cutting edge and relatively frequently produce the right answer in some contexts is amazing. |
| |
| ▲ | locknitpicker 3 hours ago | parent | prev | next [-] | | > AI is a remixer; it remixes all known ideas together. I've heard this tired old take before. It's the same type of simplistic opinion as "AI can't write a symphony". It is a logical fallacy that relies on moving goalposts to positions so impossible that its proponents lose all perspective of what the average, and even the extremely talented, individual can do. In this case you are faced with a proof that most members of the field would be extremely proud of achieving, one that for many would even be their crowning achievement. But here you are, downplaying and dismissing the feat. Perhaps you lost perspective of what science is, and how it boils down to two simple things: gather objective observations, and draw verifiable conclusions from them. This means all science does is remix ideas. Old ideas, new ideas, it doesn't really matter. That's what scientists do. So why do people win a prize when they do it, but when a computer does the same, its role is downplayed as that of a glorified card shuffler? |
| ▲ | timschmidt 4 hours ago | parent | prev | next [-] | | Obligatory Everything is a Remix: https://www.youtube.com/watch?v=nJPERZDfyWc | |
| ▲ | altmanaltman 4 hours ago | parent | prev | next [-] | | Yeah, but you're thinking of AI as a person in a lab doing creative stuff. It is used by scientists/researchers as a tool *because* it is a good remixer. Nobody is saying this means AI is superintelligent or broadly creative, but rather that very smart people can use AI to do interesting things that are objectively useful. And that is cool in its own way. | | |
| ▲ | sneak 4 hours ago | parent | prev | next [-] | | > That means the group of characters it outputs must have been quite common in the past. It won't add a new group of characters it has never seen before on its own. This is false. | |
| ▲ | Jarwain 4 hours ago | parent | prev [-] | | I mean, it's not going to invent new words, no, but it can figure out new sentences or paragraphs, even ones it hasn't seen before, if they're highly likely based on its training and context. Those new sentences and paragraphs may describe new ideas, though! | | |
| ▲ | sneak 4 hours ago | parent [-] | | LLMs are absolutely capable of inventing new words, just as they are capable of writing code that they have never seen in their training data. |
|
| |
| ▲ | keeda 3 hours ago | parent | prev | next [-] | | I'm curious as to why you consider this as the benchmark for AI capabilities. Extremely few humans can solve hard problems or do much innovation. The vast majority of knowledge work requires neither of these, and AI has been excelling at that kind of work for a while now. If your definition of AI requires these things, I think -- despite the extreme fuzziness of all these terms -- that it's closer to what most people consider AGI, or maybe even ASI. | | |
| ▲ | Validark 2 hours ago | parent [-] | | Fair point; however, I am simply more interested in how AI can advance frontiers than in how it can transcribe a meeting and give a summary, or even print out React code. I know the world is heavily in need of the menial labor, and AI has already made that stuff way easier and cheaper. However, I'm just very interested in innovation and pushing the boundaries as a more powerful force for change. One project I've been super interested in for a while is the Mill CPU architecture. While they haven't (yet) made a real chip you can buy, the ideas they have are just super awesome and innovative in a lot of areas involving instruction density & decoding, pipelining, and trying to make CPU cores take 10% of the power. I hope the Mill project comes to fruition, I hope other people build on it, and I hope that at some point AI could be a tool that prints out the kind of innovative ideas that took the Mill folks years to come up with. |
| |
| ▲ | jacquesm an hour ago | parent | prev | next [-] | | > I really hope we use this intelligence resource to make the world better. I wish I had your optimism. I'm not an AI doubter (I can see that it works all by myself, so I don't think I need such verification). But I do doubt humanity's ability to use these tools for good. The potential for power and wealth concentration is off the scale compared to most of our other inventions so far. |
| ▲ | mo7061 4 hours ago | parent | prev | next [-] | | It 100% will not be used to make the world better, and we all know it will be weaponised first to kill humans, like all preceding tech. |
| ▲ | catlifeonmars 4 hours ago | parent | prev | next [-] | | Are the only two options AI doubter and AI believer? | | |
| ▲ | Validark 2 hours ago | parent | next [-] | | Perhaps I should have elaborated more, but what I mean is that I used to think, "I genuinely don't see the point in even trying to use AI for things I'm trying to solve." Ironically, I thought that because I had repeatedly tried and tested AI, and it fell flat on its face over and over. However, this article makes me more hopeful that AI actually could be getting smarter. |
| ▲ | sph 3 hours ago | parent | prev | next [-] | | All I hear about are AI believers and AI-doubters-just-turned-believers | | | |
| ▲ | qsera 3 hours ago | parent | prev [-] | | Asking the right questions... |
| |
| ▲ | bigstrat2003 4 hours ago | parent | prev | next [-] | | The problem is that the AI industry has been caught lying about their accomplishments and cheating on tests so much that I can't actually trust them when they say they achieved a result. They have burned all credibility in their pursuit of hype. | | |
| ▲ | parasubvert 3 hours ago | parent [-] | | I'm all for skeptical inquiry, but "burning all credibility" is an overreaction. We are definitely seeing very unexpected levels of performance in frontier models. |
| |
| ▲ | keybored 30 minutes ago | parent | prev | next [-] | | > I would like to see a few more AI inventions to know for sure, but wow, it really is a new and exciting world. We already have a few years of experience with this. > I really hope we use this intelligence resource to make the world better. We already have a few years of experience with this. | |
| ▲ | doctorpangloss 3 hours ago | parent | prev | next [-] | | Most issues at every scale of community and time are political; how do you imagine AI will make that better, not worse? There's no math answer to whether a piece of land in your neighborhood should be apartments, a parking lot or a homeless shelter; whether home prices should go up or down; how much to pay for a new life-saving treatment for a child; how much your country should curb fossil fuel emissions even when another country does not... Okay, AI isn't going to change anything here, and I've just touched on a bunch of things that can and will affect you personally. Math isn't the right answer to everything, not even to most questions. Every time someone categorizes "problems" as "hard" and "easy" and talks about "problem solving," they are being co-opted into political apathy. It's cringe for a reason. There are hardly any mathematicians who get elected, and it's not because voters are stupid! But math is a great way to make money in America, which is why we are talking about it, not because it solves problems. If you are seeking a simple reason why so many of the "believers" seem to lack integrity, it is because the idea that math is the best solution to everything is intellectually bankrupt, kind of stupid. If you believe that math is the most dangerous thing because it is the best way to solve problems, you are liable to say something really stupid like this: > Imagine, say, [a country of] 50 million people, all of whom are much more capable than any Nobel Prize winner, statesman, or technologist... this is a dangerous situation... Humanity needs to wake up https://www.darioamodei.com/essay/the-adolescence-of-technol... Dario Amodei has never won an election. What does he know about countries? (Nothing.) Do you want him running anything? (No.) Or waking up humanity? In contrast, Barack Obama, who has won elections, thinks education is the best path to less violence and more prosperity. What are you a believer in? ChatGPT has disrupted exactly ONE business: Chegg, because its main use case is cheating on homework. AI, today, only threatens one thing: education. Doesn't bode well for us. | |
| ▲ | Validark 2 hours ago | parent [-] | | I agree with what you're saying, and I certainly don't think the one problem facing my country or the world is just that we didn't solve the right math problem yet. I am saddened by the direction the world keeps moving. When I wrote that I hope we use it for good things, I was just putting a hopeful thought out there, not necessarily trying to make realistic predictions. It's more than likely people will do bad things with AI. But it's actually not set in stone yet, it's not guaranteed that it has to go one way. I'm hopeful it works out. |
| |
| ▲ | otabdeveloper4 4 hours ago | parent | prev | next [-] | | > born-again AI believer sigh | | |
| ▲ | Validark 2 hours ago | parent [-] | | I honestly do think I'm being honest with myself. I have held it in my mind that I'm not impressed until it's innovative. That threshold seems to be getting crossed. I'm not saying, "I used to be an atheist, but then I realized that doesn't explain anything! So glad I'm not as dumb now!" |
| |
| ▲ | himata4113 5 hours ago | parent | prev [-] | | It's less solving a problem than trying every single solution until one works. Exhaustive search, pretty much. That's how all the hard problems are solved by AI, in my experience. | | |
| ▲ | famouswaffles 5 hours ago | parent | next [-] | | If LLMs really solved hard problems by 'trying every single solution until one works', we'd be sitting here waiting until kingdom come for any significant result at all. Instead, this is just one of a few that have cropped up in recent months, and likely the first of many to come. |
| ▲ | raincole 5 hours ago | parent | prev | next [-] | | In other words, it's solving a problem. | | |
| ▲ | slg 4 hours ago | parent | next [-] | | Yes, but "is it intelligence?" is a valid question. We have known for a long time that computers are a lot faster than humans. Get a dumb person who works fast enough, and eventually they'll spit out enough good work to surpass a smart person of average speed. It remains to be seen whether this is genuine intelligence or an infinite-monkeys-at-infinite-typewriters situation. And I'm not sure why this specific example is worthy enough to sway people in one direction or another. | | |
| ▲ | rmast 4 hours ago | parent | next [-] | | Maybe infinite monkeys at infinite typewriters hitting the statistically most likely next key based on their training. | |
| ▲ | parasubvert 3 hours ago | parent | prev | next [-] | | Someone actually mathed out infinite monkeys at infinite typewriters, and it turns out it is a great example of how misleading probabilities are when dealing with infinity: "Even if every proton in the observable universe (which is estimated at roughly 10^80) were a monkey with a typewriter, typing from the Big Bang until the end of the universe (when protons might no longer exist), they would still need a far greater amount of time – more than three hundred and sixty thousand orders of magnitude longer – to have even a 1 in 10^500 chance of success. To put it another way, for a one in a trillion chance of success, there would need to be 10^360,641 observable universes made of protonic monkeys." Often, things that have probability 1 in theory are, in practice, safe to treat as probability 0. So no, LLMs are not brute-force dummies. We are seeing increasingly emergent behavior in frontier models. | |
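(The scale of those numbers is easy to reproduce; a back-of-envelope sketch with my own figures, not the quoted article's: the chance of one monkey typing a given N-letter string on a 26-key typewriter in one attempt is (1/26)^N.)

    import math

    KEYS = 26
    for n in (10, 100, 130_000):  # ~130,000 letters is the usual figure for Hamlet
        print(f"N={n}: p = 10^{-n * math.log10(KEYS):.0f}")
    # N=10: p = 10^-14
    # N=100: p = 10^-141
    # N=130000: p = 10^-183947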
| ▲ | staticassertion 3 hours ago | parent | next [-] | | > So no. LLMs are not brute force dummies. We are seeing increasingly emergent behavior in frontier models. Woah! That was a leap. "We are seeing ... emergent behaviors" does not follow from "it's not brute force". It is unsurprising that an LLM performs better than random! That's the whole point. It does not imply emergence. | |
| ▲ | qsera 2 hours ago | parent | prev [-] | | > We are seeing increasingly emergent behavior in frontier models. What? Did you see one crying? |
| |
| ▲ | virgildotcodes 4 hours ago | parent | prev [-] | | The real question is how to define intelligence in a way that isn't artificially constrained to eliminate all possibilities except our own. |
| |
| ▲ | qsera 3 hours ago | parent | prev | next [-] | | A random sentence generator can also produce the correct solution to a problem once in a long while... that does not mean it "solved" anything. |
| ▲ | kranner 5 hours ago | parent | prev [-] | | Bet you didn't come up with that comment by first discarding a bunch of unsuitable comments. | | |
| ▲ | raincole 4 hours ago | parent | next [-] | | I hired an artist for an oil painting. The artist drew 10 pencil sketches and said "hmm I think this one works the best" and finished the painting based on it. I said he didn't one shot it and therefore he has no ability to paint, and refused to pay him. | |
| ▲ | virgildotcodes 5 hours ago | parent | prev | next [-] | | You learned what was unsuitable over your entire life until now by making countless mistakes in human interaction. A basic AI chat response also doesn't first discard all other possible responses. | |
| ▲ | bfivyvysj 5 hours ago | parent | prev | next [-] | | How often do you self edit before submitting? | |
| ▲ | ivalm 5 hours ago | parent | prev [-] | | because commenting is easy and solving hard problems is hard |
|
| |
| ▲ | jasonfarnon 4 hours ago | parent | prev | next [-] | | The link has an entire section on "The infeasibility of finding it by brute force." | |
| ▲ | konart 3 hours ago | parent | prev | next [-] | | But this is exactly how we do math. We start writing all those formulas etc., and if at some point we realise we went the wrong way, we start from the beginning (or from some point we are sure about). |
| ▲ | kelseyfrog 5 hours ago | parent | prev | next [-] | | How do you think mathematicians solve problems? | |
| ▲ | adventured 5 hours ago | parent | prev | next [-] | | No, that's precisely solving a problem. Shotgunning it is an entirely valid approach to solving something. If AI proves to be particularly great at that approach, given the improvement runway that still remains, that's fantastic. | |
| ▲ | lsc4719 5 hours ago | parent | prev [-] | | That's also the only way humans solve hard problems. | | |
| ▲ | himata4113 5 hours ago | parent | next [-] | | Not always; humans are a lot better at poofing a solution into existence without even trying or testing. It's why we have the scientific method: we come up with a process and verify it, but more often than not we already know that it will work. AI, by comparison, thinks of every possible approach and tries them all. Not saying that humans never do this as well, but it's mostly reserved for when we just throw mud at a wall and see what sticks. | | |
| ▲ | coderenegade 3 hours ago | parent | next [-] | | That's just not true at all. There are entire fields that rest pretty heavily on brute force search. Entire theses in biomedical and materials science have been written to the effect of "I ran these tests on this compound, and these are the results", without necessarily any underlying theory more than a hope that it'll yield something useful. As for advances where there is a hypothesis, it rests on the shoulders of those who've come before. You know from observations that putting carbon in iron makes it stronger, and then someone else comes along with a theory of atoms and molecules. You might apply that to figuring out why steel is stronger than iron, and your student takes that and invents a new superalloy with improvements to your model. Remixing is a fundamental part of innovation, because it often teaches you something new. We aren't just alchemying things out of nothing. | |
| ▲ | virgildotcodes 5 hours ago | parent | prev | next [-] | | More often than not, far, far, far more often than not, we do not already know that it will work. For all human endeavors, from the beginning of time. If we get to any sort of confidence it will work it is based on building a history of it, or things related to "it" working consistently over time, out of innumerable other efforts where other "it"s did not work. | |
| ▲ | nextaccountic 4 hours ago | parent | prev [-] | | AI can one-shot problems too, if it has the necessary tools in its training data, has the right thing in context, or has access to tools to search relevant data. Not all AI solutions are iterative trial and error. Also > humans are a lot better at (...) That's maybe true in 2026, but it's hard to make statements about "AI" in a field that is advancing so quickly. For most of 2025, for example, AI doing math like this wouldn't even have been possible. |
| |
| ▲ | jMyles 5 hours ago | parent | prev [-] | | There have been both inductive and deductive solutions to open math problems by humans in the past decade, including to fairly high-profile problems. |
|
|
|
|
| ▲ | alberth 6 hours ago | parent | prev | next [-] |
| For those, like me, who find the prompt itself of interest … > A full transcript of the original conversation with GPT-5.4 Pro can be found here [0] and GPT-5.4 Pro’s write-up from the end of that transcript can be found here [1]. [0] https://epoch.ai/files/open-problems/gpt-5-4-pro-hypergraph-... [1] https://epoch.ai/files/open-problems/hypergraph-ramsey-gpt-5... |
|
| ▲ | johnfn 6 hours ago | parent | prev | next [-] |
| I like to imagine that the number of consumed tokens before a solution is found is a proxy for how difficult a problem is, and it looks like Opus 4.6 consumed around 250k tokens. That means that a tricky React refactor I did earlier today at work was about half as hard as an open problem in mathematics! :) |
| |
| ▲ | chromacity 4 hours ago | parent | next [-] | | You're kidding, but it could be true? Many areas of mathematics are, first and foremost, incredibly esoteric and inaccessible (even to other mathematicians). For this one, the author stated that there might be 5-10 people who have ever made any effort to solve it. Further, the author believed it's a solvable problem if you're qualified and grind for a bit. In software engineering, if only 5-10 people in the world have ever toyed with an idea for a specific program, it wouldn't be surprising that the implementation doesn't exist, almost independent of complexity. There's a lot of software I haven't finished simply because I wasn't all that motivated and got distracted by something else. Of course, it's still miraculous that we have a system that can crank out code / solve math in this way. | | |
| ▲ | kuschku 2 hours ago | parent [-] | | If only 5-10 people have ever tried to solve something in programming, every LLM will start regurgitating your own decade-old attempt again and again, sometimes even with the exact comments you wrote back then (good to know it trained on my GitHub repos...), but you can spend upwards of 100 million tokens in gemini-cli or claude code and still not make any progress. It's after all still a remix machine; it can only interpolate between that which already exists. Which is good for a lot of things, considering everything is a remix, but it can't do truly new tasks. |
| |
| ▲ | gf000 2 hours ago | parent | prev | next [-] | | I think it's more of a data vs intelligence thing. They are separate dimensions. There are problems that don't require any data, just "thinking" (many parts of math sit here), and there are others where data is the significant part (e.g. some simple causality for which we have a bunch of data). Certain problems are in between the two (a React refactor probably sits there). So no, tokens are probably not a good proxy for complexity; data-heavy problems will trivially outgrow the former category. |
| ▲ | nextaccountic 4 hours ago | parent | prev | next [-] | | That's why context management is so important. AI not only gets more expensive if you waste tokens like that, it may perform worse too. Even as context sizes get larger, this will likely remain relevant. Especially since AI providers may jack up the price per token at any time. |
| ▲ | ozozozd 5 hours ago | parent | prev | next [-] | | Try the refactor again tomorrow. It might have gotten easier or more difficult. | |
| ▲ | locknitpicker 2 hours ago | parent | prev | next [-] | | > I like to imagine that the number of consumed tokens before a solution is found is a proxy for how difficult a problem is (...) The number of tokens required to get to an output is a function of the sequence of inputs/prompts and of how a model was trained. There are LLMs quite capable of accomplishing complex software engineering work that struggle with translating valid text from English to some other languages. The translations can be improved with additional prompting, but that doesn't mean the problem is more challenging. |
| ▲ | sublinear 5 hours ago | parent | prev [-] | | You might be joking, but you're probably also not that far off from reality. I think more people should question all this nonsense about AI "solving" math problems. The details about human involvement are always hazy, and the significance of the problems is opaque to most. We are very far away from the sensationalized and strongly implied idea that we are doing something miraculous here. | | |
| ▲ | johnfn 5 hours ago | parent | next [-] | | I am kind of joking, but I actually don't know where the flaw in my logic is. It's like one of those math "proofs" that 1 + 1 = 3. If I were to hazard a guess, I think that tokens spent thinking through hard math problems probably correspond to harder human thought than tokens spent thinking through React issues. I mean, LLMs have to expend hundreds of tokens to count the number of r's in strawberry. You can't tell me that if I count the number of r's in strawberry 1000 times I have done the mental equivalent of solving an open math problem. | | |
| ▲ | throw310822 5 hours ago | parent | next [-] | | You can spend countless "tokens" solving minesweeper or sudoku. This doesn't mean that you solved difficult problems: just that the solutions are very long and, while each step requires reasoning, the difficulty of that reasoning is capped. | |
| ▲ | gpm 5 hours ago | parent | prev | next [-] | | Some thoughts. 1. LLMs aren't "efficient", they seem to be as happy to spin in circles describing trivial things repeatedly as they are to spin in circles iterating on complicated things. 2. LLMs aren't "efficient", they use the same amount of compute for each token but sometimes all that compute is making an interesting decision about which token is the next one and sometimes there's really only one follow up to the phrase "and sometimes there's really only" and that compute is clearly unnecessary. 3. A (theoretical) efficient LLM still needs to emit tokens to tell the tools to do the obviously right things like "copy this giant file nearly verbatim except with every `if foo` replaced with `for foo in foo`. An efficient LLM might use less compute for those trivial tokens where it isn't making meaningful decisions, but if your metric is "tokens" and not "compute" that's never going to show up. Until we get reasonably efficient LLMs that don't waste compute quite so freely I don't think there's any real point in trying to estimate task complexity by how long it takes an LLM. | | | |
| ▲ | pinkmuffinere 5 hours ago | parent | prev [-] | | This is interesting; I like the thought about "what makes something difficult". Focusing just on that, my guess is that there are significant portions of work that we commonly miss in our evaluations: 1. Knowing how to state the problem. I.e., going from the vague problem of "I don't like this, but I do like this" to the more specific problem of "I desire property A". In math a lot of open problems are already precisely stated, but then the user has to do the work of _understanding_ what the precise statement is. 2. Verifying that the proposed solution actually is a full solution. This math problem actually illustrates them both really well to me. I read the post, but I still couldn't do _either_ of the steps above, because there's a ton of background work to be done. Even if I were very familiar with the problem space, verifying the solution requires work -- manually looking at it, writing it up in Coq, something like that. I think this is similar to the saying "it takes 10 years to become an overnight success" |
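(To give a flavor of what "writing it up" for a proof assistant looks like, here is the thread's favorite example as a trivially small sketch, in Lean 4 rather than Coq; a real write-up of the hypergraph result would be enormously larger.)

    -- Lean 4: `rfl` closes the goal because both sides compute
    -- to the same numeral.
    theorem three_plus_five : 3 + 5 = 8 := rfl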
| |
| ▲ | famouswaffles 5 hours ago | parent | prev | next [-] | | >The details about human involvement are always hazy, and the significance of the problems is opaque to most. Not really. You're just in denial and not really all that interested in the details. This very post has the chat transcript of the solution. |
| ▲ | typs 5 hours ago | parent | prev [-] | | I mean the details are in the post. You can see the conversation history and the mathematician survey on the problem |
|
|
|
| ▲ | EternalFury 3 hours ago | parent | prev | next [-] |
| I am thinking there’s a large category of problems that can be solved by resampling existing proofs.
It’s the kind of brute-force expedition a machine can attempt relentlessly where humans would go mad trying.
It probably doesn’t really advance the field, but it can turn conjectures into theorems. |
| |
| ▲ | utopiah 2 hours ago | parent | next [-] | | Indeed. I can't find my old comment on the topic, but that's the point: it's not how feasible it is to "find" new proofs, but rather how meaningful those proofs are. Are they yet another iteration of the same kind, perfectly fitting the current paradigm and thus bringing very little to the table, or are they radical and thus potentially (but not always) opening up the field? With brute force, or slightly better than brute force, it's most likely the former: not totally pointless, but probably not very useful. In fact it might not even be worth the tokens spent. |
| ▲ | PxldLtd an hour ago | parent | prev | next [-] | | I'm of the opinion that everything we've discovered is via combinatorial synthesis. Standing on the shoulders of giants and all that. I'm not sure I've seen any convincing argument that we've discovered anything ex nihilo. | |
| ▲ | simianwords 2 hours ago | parent | prev [-] | | How do you think you can design a benchmark to solve truly novel problems? |
|
|
| ▲ | virgildotcodes 5 hours ago | parent | prev | next [-] |
| I don't know why I am still perpetually shocked that the default assumption is that humans are somehow unique. It's this pervasive belief that underlies so much discussion around what it means to be intelligent. The null hypothesis goes out the window. People constantly make comments like "well it's just trying a bunch of stuff until something works" and it seems that they do not pause for a moment to consider whether or not that also applies to humans. If they do, they apply it in only the most restrictive way imaginable, some 2 dimensional caricature of reality, rather than considering all the ways that humans try and fail in all things throughout their lifetimes in the process of learning and discovery. There's still this seeming belief in magic and human exceptionalism, deeply held, even in communities that otherwise tend to revolve around the sciences and the empirical. |
| |
| ▲ | wrqvrwvq 4 hours ago | parent | next [-] | | It's only because humans came up with a problem, worked with the AI, and verified the result that this achievement means anything at all. An AI "checking its own work" is practically irrelevant when they all seem to go back and forth on whether you need the car at the carwash to wash the car. Undoubtedly people have been passing this set of problems to AIs for months or years and have gotten back either incorrect results or results they didn't understand; either way, a human confirmation is required. AI hasn't presented any novel problems, other than the multitudes of social problems described elsewhere. AI doesn't pursue its own goals and wouldn't know whether they've "actually been achieved". This is to say nothing of the cost of this small but remarkable advance. Trillions of dollars in training and inference, and so far we have a couple of minor (trivial?) math solutions. I'm sure if someone had bothered funding a few PhDs for a year we could have found this without AI. | |
| ▲ | hodgehog11 2 hours ago | parent | next [-] | | Funding a few PhDs for a year costs orders of magnitude more than it took to solve this problem in inference costs. Also, this has been active research for some time. Or I guess the people working on it are just not as good as a random bunch of students? It's amazing the lengths that people will go to in order to maintain their worldview, even if it means belittling hardworking people. I take it you're not a mathematician. This is an achievement, regardless of whether you like LLMs or not, so let's not belittle the people working on these kinds of problems please. | |
| ▲ | famouswaffles 2 hours ago | parent [-] | | >It's amazing the lengths that people will go to in order to maintain their worldview, even if it means belittling hardworking people. This is the most baffling and ironic aspect of these discussions. Human exceptionalism is what drives these arguments, but the machines are becoming so good you can no longer make them without putting down even the top-percenter humans in the process. Same thing happening all over this thread (https://news.ycombinator.com/item?id=47006594). And it's like they don't even realize it. |
| |
| ▲ | famouswaffles 3 hours ago | parent | prev [-] | | >It's only because humans came up with a problem, worked with the AI and verified the result that this achievement means anything at all. Replace AI with human here and that's... just how collaborative research works lol. |
| |
| ▲ | snemvalts 4 hours ago | parent | prev | next [-] | | The ability to learn and infer without absorbing millions of books and all the text on the internet really does make us special. And at only 20 watts! | | |
| ▲ | famouswaffles 4 hours ago | parent | next [-] | | Last I checked, humans didn't pop into existence doing that. It happened after billions of years of brute-force, trial-and-error evolution. So well done for falling into the exact same trap the OP cautions against. Intelligence from scratch requires a mind-boggling amount of resources, and humans were no different. | |
| ▲ | sweezyjeezy 2 hours ago | parent [-] | | To be fair, it is still pretty remarkable what the human brain does, especially in early years - there is no text embedded in the brain, just a crazily efficient mechanism to learn hierarchical systems. As far as I know, AI intelligence cannot do anything similar to this - it generally relies on giga-scaling, or finetuning tasks similar to those it already knows. Regardless of how this arose, or if it's relevant to AGI, this is still a uniqueness of sorts. |
| |
| ▲ | throw310822 an hour ago | parent | prev [-] | | To be fair, the knowledge embedded in an LLM is also, at this point, a couple of orders of magnitude (at least) larger than what the average human being can retain. So it's not like all those books and all that text on the internet are used just to bring them to our level; they go way beyond it. |
| |
| ▲ | conz 4 hours ago | parent | prev | next [-] | | Re: "I don't know why I am still perpetually shocked that the default assumption is that humans are somehow unique." Perhaps this might better help you understand why this assumption still holds:
https://en.wikipedia.org/wiki/Orchestrated_objective_reducti... | | |
| ▲ | throw310822 an hour ago | parent | next [-] | | "Controversial theory justifies assumption". Because humans never hallucinate. | |
| ▲ | staticassertion 3 hours ago | parent | prev [-] | | It doesn't. I actually completely reject that theory, and it's nice to see that Wikipedia notes that it is "controversial". There are extremely good reasons to reject this theory. For one thing, any quantum effects are going to be quite tiny/trivial, because the brain is too large, hot, wet, etc., to see larger effects, so you have to somehow make a leap from "tiny effects that last for no time at all" to "this matters fundamentally in some massive way". It likely requires rejection of functionalism, or the acceptance that quantum states are required for certain functions. Both of those are heavy commitments, with the latter implying that there are either functions that require structures that can't be instantiated without quantum effects or functions that can't be emulated without quantum effects, both of which seem extremely unlikely to me. Probably the far more important reason: it doesn't solve any problem. It's just "quantum woo, therefore libertarian free will" most of the time. It's mostly garbage, maybe with a tiny bit of interesting stuff in there. It also would do nothing to indicate that human intelligence is unique. |
| |
| ▲ | staticassertion 3 hours ago | parent | prev | next [-] | | > I don't know why I am still perpetually shocked that the default assumption is that humans are somehow unique. Because, empirically, we have numerous unique and differentiable qualities, obviously. Plenty of time goes into understanding this, we have a young but rigorous field of neuroscience and cognitive science. Unless you mean "fundamentally unique" in some way that would persist - like "nothing could ever do what humans do". > People constantly make comments like "well it's just trying a bunch of stuff until something works" and it seems that they do not pause for a moment to consider whether or not that also applies to humans. I frankly doubt it applies to either system. I'm a functionalist so I obviously believe that everything a human brain does is physical and could be replicated using some other material that can exhibit the necessary functions. But that does not mean that I have to think that the appearance of intelligence always is intelligence, or that an LLM/ Agent is doing what humans do. | |
| ▲ | nicman23 2 hours ago | parent | prev | next [-] | | it is not the assumption that humans are unique. it is that statistical models cannot really think out of the box most of the time | |
| ▲ | slopinthebag 4 hours ago | parent | prev [-] | | > I don't know why I am still perpetually shocked that the default assumption is that humans are somehow unique. Uh, because up until and including now, we are...? | | |
| ▲ | virgildotcodes 4 hours ago | parent [-] | | Every living thing on Earth is unique. Every rock is unique in virtually infinite ways from the next otherwise identical rock. There are also a tremendous number of similarities between all living things and between rocks (and between rocks and living things). Most ways in which things are unique are arguably uninteresting. The default mode, the null hypothesis, should be to assume that human intelligence isn't interestingly unique unless it can be proven otherwise. In these repeated discussions around AI, there is criticism of the way an AI solves a problem, without any actual critical thought about the way humans solve problems. The latter is left up to the assumption that "of course humans do X differently", and if you press, you invariably end up at something couched in a vague mysticism about our inner workings. Humans apparently create something from nothing, without the recombination of any prior knowledge or outside information, and they get it right on the first try. Through what, divine inspiration from the God who made us and only us in His image? | |
| ▲ | gf000 an hour ago | parent | next [-] | | Humans are obviously unique in an interesting way. People only "move the goalpost" because it's not an interesting question whether humans can do some great stuff; the interesting question is where the boundary is (whether against animals or AI). Some example feats which make humans trivially superior (in terms of intelligence): the invention of nuclear bombs/plants, the theory of relativity, etc. |
| ▲ | slopinthebag 4 hours ago | parent | prev [-] | | I doubt you can even define intelligence sufficiently to argue this point, since that's an ongoing debate without a resolution thus far. But you claimed that humans aren't unique. I think it's pretty obvious we are, on many dimensions, including what you might classify as "intelligence". You don't even necessarily have to believe in a "soul" or something like that, although many people do. The capabilities of a human far surpass every single AI to date, and far more efficiently as well. That we are able to brute-force a simulacrum of intelligence in a few narrow domains is incredible, but we should not denigrate humans when celebrating this. > There's still this seeming belief in magic and human exceptionalism, deeply held, even in communities that otherwise tend to revolve around the sciences and the empirical. Do you ever wonder why that is? I often wonder why tech has so many reductionist, materialist, and quite frankly anti-human, thinkers. | |
| ▲ | famouswaffles 3 hours ago | parent | next [-] | | >The capabilities of a human far surpass every single AI to date What does this mean? Are you saying every human could have achieved this result?
Or this? https://openai.com/index/new-result-theoretical-physics/ Because, well, you'd be wrong. >, and far more efficiently as well. That we are able to brute-force a simulacrum of intelligence in a few narrow domains is incredible, but we should not denigrate humans when celebrating this. Human intelligence was brute-forced. Please let's all stop pretending that those billions of years of evolution don't count and that we poofed into existence. And you can keep parroting 'simulacrum of intelligence' all you want, but that isn't going to make it any more true. | |
| ▲ | slopinthebag 3 hours ago | parent [-] | | > The capabilities of a human far surpass every single AI to date Meaning that however you (reasonably) define intelligence, if you compare humans to any AI system, humans are overwhelmingly more capable. Defining "intelligence" as "solving a math equation" is not a reasonable definition of intelligence. Or else we'd be talking about how my calculator is intelligent. Of course computers can compute faster than we can; that's beside the point. > Human intelligence was brute-forced. No, I don't mean how the intelligence evolved or was created. But if you want to make that argument you're essentially asserting we have a creator, because to "brute force" something means it was intentional. Evolution is not an intentional process, unless you believe in God or a creator of sorts, which is totally fair but probably not what you were intending. But my point is that LLMs essentially arrive at answers by brute force through search. Go look at what a reasoning model does to count the letters in a sentence, or the amount of energy it takes to do things humans can do with orders of magnitude less (our brain runs on 20% of a lightbulb!). | |
| ▲ | famouswaffles 3 hours ago | parent [-] | | >Meaning that however you (reasonably) define intelligence, if you compare humans to any AI system, humans are overwhelmingly more capable. Really? Every human? Are you sure? Because I certainly wouldn't ask just any human for the things I use these models for, and I use them for a lot of things. So, to me the idea that all humans are 'overwhelmingly more capable' is blatantly false. >Defining "intelligence" as "solving a math equation" is not a reasonable definition of intelligence. What was achieved here, or in the link I sent, is not just "solving a math equation". >Or else we'd be talking about how my calculator is intelligent. If you said that humans are overwhelmingly more capable than calculators in arithmetic, well, I'd tell you you were talking nonsense. >Of course computers can compute faster than we can; that's beside the point. I never said anything about speed. You are not making any significant point here lol >No, I don't mean how the intelligence evolved or was created. Well then what are you saying? Because the only brute-forced aspect of LLM intelligence is its creation. If you do not mean that, then just drop the point. >But if you want to make that argument you're essentially asserting we have a creator, because to "brute force" something means it was intentional. First of all, this makes no sense, sorry. Evolution is regularly described as a brute-force process by atheist and religious scientists alike. Second, I don't have any problem with people thinking we have a creator, although even that doesn't necessarily mean a magic 'poof into existence' either. >But my point is that LLMs essentially arrive at answers by brute force through search. Sorry, but that's just not remotely true. This is so untrue I honestly don't know what to tell you. This very post, with the transcript available, is an example of how untrue it is. >or the amount of energy it takes to do things humans can do with orders of magnitude less (our brain runs on 20% of a lightbulb!). Meaningless comparison. You are looking at two completely different substrates. Do you realize how much compute it would take to run a full simulation of the human brain on a computer? The most powerful supercomputer on the planet could not run this in real time. | |
| ▲ | gf000 an hour ago | parent [-] | | > Really? Every human? Yes, in many ways, absolutely. Just because a model is a better "Google" than my dummy friend doesn't mean that this same friend isn't more capable in countless other cases. > Meaningless comparison. You are looking at two completely different substrates. Do you realize how much compute it would take to run a full simulation of the human brain on a computer? The most powerful supercomputer on the planet could not run this in real time. Isn't that just more proof of how efficient the human brain is? Especially given that a wire has much better properties than water solutions in bags. | |
| ▲ | famouswaffles 41 minutes ago | parent [-] | | >Just because a model is a better "Google" than my dummy friend doesn't mean that this same friend isn't more capable in countless other cases. People use LLMs for a lot of things. 'Better Google' is a tiny slice of that. >Isn't that just more proof of how efficient the human brain is? Sure. So what? If a game runs poorly on one piece of hardware and excellently on another, does that mean the game is fundamentally different between the two devices? No, of course not. |
| |
| ▲ | slopinthebag 2 hours ago | parent | prev [-] | | I never said that humans are better than LLMs along every axis. Rather, a reasonable definition of intelligence would necessarily encompass domains in which LLMs are either incapable or inferior to us. Here might be some definitions of intelligence, for example: > The aggregate or global capacity of the individual to act purposefully, to think rationally, and to deal effectively with his environment. > "...the resultant of the process of acquiring, storing in memory, retrieving, combining, comparing, and using in new contexts information and conceptual skills". > Goal-directed adaptive behavior. > a system's ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation But even a housefly possesses levels of intelligence regarding flight and spatial awareness that dominate any LLM. Would it be fair to say a fly is more intelligent than an LLM? It certainly is along a narrow set of axes. > Because the only brute-forced aspect of LLM intelligence is its creation. I would consider statistical reasoning systems that can simulate aspects of human thought to be a form of brute force. Not quite an exhaustive search, but massively compressed experience + pattern matching. But regardless, even if both forms of intelligence arrived via some form of brute force, what is more important to me is the result of that: how the process of employing our intelligence looks. > This very post, with the transcript available, is an example of how untrue it is. The transcript lacks the vector embeddings of the model's reasoning. It's literally just a summary from the model - not even that, really. > Do you realize how much compute it would take to run a full simulation of the human brain on a computer? The most powerful supercomputer on the planet could not run this in real time. You're so close to getting it lol | |
| ▲ | famouswaffles 2 hours ago | parent [-] | | >I never said that humans are better than LLMs along every axis. Rather, a reasonable definition of intelligence would necessarily encompass domains in which LLMs are either incapable or inferior to us. So all humans are overwhelmingly more intelligent but cannot even manage to be as capable in a significant number of domains? That's not what overwhelming means. >I would consider statistical reasoning systems that can simulate aspects of human thought to be a form of brute force. That is not really what "brute force" means. Pattern learning over a compressed representation of experience is not the same thing as exhaustive search. Calling any statistical method "brute force" just makes the term too vague to be useful. > what is more important to me is the result of that: how the process of employing our intelligence looks. But this is exactly where you are smuggling in assumptions. We do not actually understand the internal workings of either the human brain or frontier LLMs at the level needed to make confident claims like this. So a lot of what you are calling "the result" is really just your intuition about what intelligence is supposed to look like. And I do not think that distinction is as meaningful as you want it to be anyway. Flight is flight. Birds fly and planes fly. A plane is not a "simulacrum of flight" just because it achieves the same end by a different mechanism. >The transcript lacks the vector embeddings of the model's reasoning. It's literally just a summary from the model - not even that, really. You do not need access to every internal representation to see that the model did not arrive at the answer by brute-forcing all possibilities. The observed behavior is already enough to rule that out. > Do you realize how much compute it would take to run a full simulation of the human brain on a computer? >You're so close to getting it lol. No, you don't understand what I'm saying. If we were to be more faithful to the brain in silicon, it would be even less efficient than LLMs, never mind humans. Does that mean how the brain works is wrong? No, it means we are dealing with two entirely different substrates, and directly comparing efficiencies like that to show one is superior is silly. | |
| ▲ | slopinthebag an hour ago | parent [-] | | > So all humans are overwhelmingly more intelligent but cannot even manage to be as capable in a significant number of domains When the number of domains in which humans are more capable than LLMs vastly exceeds the number of domains in which LLMs are more capable than humans, yes. I also agree that we don't have a great understanding of either human or LLM intelligence, but we can at least observe major differences and conclude that there are, in fact, major differences. In the same way, we can conclude that birds and planes have major differences, and saying "there's nothing unique about birds, look at planes" is just a really weird thing to say. > If we were to be more faithful to the brain in silicon, it would be even less efficient than LLMs Do you think perhaps this massive difference points to there being a significant and foundational structural and functional difference between these types of intelligences? |
|
|
|
|
| |
| ▲ | blackcatsec 3 hours ago | parent | prev [-] | | > I often wonder why tech has so many reductionist, materialist, and quite frankly anti-human, thinkers. I think it comes from a position of arrogance/ego. I'll speak for the US here, since that's what I know best; the average 'techie' skews toward the higher end of the intelligence distribution. This is a very, very broad stroke, and that's intentional, to illustrate my point. Because of this, techie culture gains quite a bit of arrogance with regard to the masses. And this has been trained into tech culture since childhood, whether it be adults praising us for being "so smart", or for having "figured out the VCR", or some other random tech problem that literally almost any human being can solve by simply reading the manual. What I've found, in the vast majority of technical problem-solving cases that average people struggle with, is that if they just took a few minutes to read a manual they'd be able to solve a lot of it themselves. In short, I don't believe, as a very strong techie, that I'm "smarter than most", but rather that I've taken the time to dive into a subject area that most other humans do not feel the need nor desire to. There are objectively hard problems in tech to solve, but the people solving THOSE problems in the tech industry are few and far between. And so the tech industry as a whole has spent the last decade or two spinning circles on increasingly complex systems to continue feeding its own ego about its own intelligence. We're now at a point where, rather than solving the puzzle, most techies are creating incrementally complex puzzles to solve because they're bored of the puzzles in front of them. "Let me solve that puzzle by making a puzzle solver." "Okay, now let me make a puzzle solver creation tool to create puzzle solvers to solve the puzzle." And so forth. At the end of the day, you're still just solving a puzzle... But it's this arrogance that really bothers me in the tech bro culture world. And, more importantly, at least in some tech bro circles, they have realized that their path to an exponential increase in wealth doesn't lie in creating new and novel ways to solve the same puzzles, but in touting AI as the greatest puzzle solver creation tool puzzle solver known to man (and letting themselves grift off of it for a little bit). | |
| ▲ | virgildotcodes 9 minutes ago | parent | next [-] | | It's funny because the fundamental thing I'm speaking out against is the arrogance of human exceptionalism. This whole debate about what it means to be intelligent or human just seems like we're making the same mistakes we've made over and over. Earth as the center of the universe, sun as the center of the universe, man as the only animal with consciousness and intellect, the anthropomorphic nature of the majority of the deities in our religions and the anthropocentric purpose of the universe within those religions... I think this desire to believe that we are special, that the universe in some way does ultimately revolve around us, is seemingly a deep need in our psyche but any material analysis of our universe shows that it is extremely unlikely that we hold that position. | |
| ▲ | slopinthebag 2 hours ago | parent | prev [-] | | I largely agree with you, but I also see this same type of thinking appear in people who I know are not arrogant - at least not in the techbro-ish way. |
|
|
|
|
|
|
| ▲ | zurfer an hour ago | parent | prev | next [-] |
| "In this scaffold, several other models were able to solve the problem as well: Opus 4.6 (max), Gemini 3.1 Pro, and GPT-5.4 (xhigh)." I find that very surprising. This problem seems out of reach 3 months ago but now the 3 frontier models are able to solve it. Is everybody distilling each others models? Companies sell the same data and RL environment to all big labs? Anybody more involved can share some rumors? :P I do believe that AI can solve hard problems, but that progress is so distributed in a narrow domain makes me a bit suspicious somehow that there is a hidden factor. Like did some "data worker" solve a problem like that and it's now in the training data? |
|
| ▲ | pugio 3 hours ago | parent | prev | next [-] |
| I've never yet been "that guy" on HN, but... the title seems misleading. The actual title is "A Ramsey-style Problem on Hypergraphs", and a more descriptive title would be "All latest frontier models can solve a frontier math open problem". (It wasn't just GPT-5.4.) Super cool, of course. |
|
| ▲ | qnleigh 4 hours ago | parent | prev | next [-] |
| Their 'Open Problems page' linked below gives some interesting context. They list 15 open problems in total, categorized as 'moderately interesting,' 'solid result,' 'major advance,' or 'breakthrough.' The solved problem is listed as 'moderately interesting,' which is presumably the easiest category. But it's notable that the problem was selected and posted here before it was solved. I wonder how long until the other 3 problems in this category are solved. https://epoch.ai/frontiermath/open-problems |
| |
| ▲ | fnordpiglet 4 hours ago | parent [-] | | I'd hope this isn't a goalpost move - an open math problem of any sort being solved by a language model is absolute science fiction. | |
| ▲ | zozbot234 4 hours ago | parent | next [-] | | That's been achieved already with a few Erdős problems, though those tended to be ambiguously stated in a way that made them less obviously compelling to humans. This problem is obscure; even the linked writeup admits that perhaps ~10 mathematicians worldwide are genuinely familiar with it. But it's not unfeasibly hard for a few weeks' or months' work by a human mathematician. | |
| ▲ | tovej 3 hours ago | parent | prev [-] | | It is not. You're operating under the assumption that all open math problems are difficult and novel. This particular problem was about improving the lower bound for a function tracking a property of hypergraphs (undirected graphs where an edge can contain more than two vertices). Both constructing hypergraphs (sets) and proving lower bounds are very regular, chore-type tasks that are common in maths. In other words, there's plenty of this type of proof in the training data. LLMs construct proofs all the time anyway: every time they write a program, since every program corresponds to a proof (the Curry-Howard correspondence). It doesn't mean they're reasoning about them, but they do construct proofs. This isn't science fiction. But it's nice that the LLMs solved something for once. | | |
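To make the objects in the comment above concrete, here is a minimal illustrative sketch in Python (a toy, not the actual FrontierMath problem): a 3-uniform hypergraph represented as a set of 3-element edges, plus a brute-force computation of its independence number, the kind of quantity Ramsey-style lower-bound arguments track.

```python
from itertools import combinations

def independence_number(n, edges):
    """Size of the largest subset of range(n) that contains no edge entirely."""
    edge_sets = [frozenset(e) for e in edges]
    for size in range(n, 0, -1):  # search from the largest subsets down
        for subset in combinations(range(n), size):
            s = set(subset)
            if not any(e <= s for e in edge_sets):  # no edge fully inside
                return size
    return 0

# Toy example: the "tight cycle" on 5 vertices, a 3-uniform hypergraph.
edges = [(i % 5, (i + 1) % 5, (i + 2) % 5) for i in range(5)]
print(independence_number(5, edges))  # -> 3
```

Extremal constructions in real proofs are far cleverer than brute force, but the objects themselves are this elementary.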
| ▲ | utopiah 2 hours ago | parent [-] | | > nice that the LLMs solved something for once. That sentence alone needs unpacking IMHO, namely that no LLM suddenly decided that today was the day it would solve a math problem. Instead, a couple of people who love mathematics, doing it either for fun or professionally, directly asked a model to solve a very specific task that they estimated was solvable. The LLM itself was fed countless related proofs. They then guided the model and verified its output until they found something they considered good enough. My point is that the system here is not the LLM alone; that would be radically more impressive. | |
| ▲ | tovej 41 minutes ago | parent [-] | | I 100% agree. The LLM was just used to autocomplete a ready-made strategy. |
|
|
|
|
|
| ▲ | sigbottle 17 minutes ago | parent | prev | next [-] |
| Reading some of these comments, I feel like some people need to go and read the history of ideas and philosophy (which is easier today than ever before with the help of LLMs!). It's like watching 17th-18th century debates between rationalists and empiricists rehash the same arguments, lol. Maybe we're due for a 21st century Kant. |
|
| ▲ | 6thbit 7 hours ago | parent | prev | next [-] |
| > Subsequent to this solve, we finished developing our general scaffold for testing models on FrontierMath: Open Problems. In this scaffold, several other models were able to solve the problem as well: Opus 4.6 (max), Gemini 3.1 Pro, and GPT-5.4 (xhigh). Interesting. What's that “scaffold”? A sort of unit-test framework for proofs? |
| |
| ▲ | inkysigma 6 hours ago | parent [-] | | I think in this context, a scaffold is generally the harness that surrounds the actual model: any tools, ways of laying out tasks, or auto-critiquing methods. There's quite a bit of variance in model performance depending on the scaffold, so comparisons are always a bit murky. | | |
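As a rough illustration of what such a harness could look like, here is a minimal sketch. `query_model` and `verify_proof` are hypothetical placeholders for a model API and an auto-critique step; nothing here reflects Epoch's or any lab's actual scaffold.

```python
def query_model(prompt: str) -> str:
    """Hypothetical LLM call; returns a candidate solution or proof."""
    raise NotImplementedError("stand-in for a real model API")

def verify_proof(candidate: str) -> tuple[bool, str]:
    """Hypothetical auto-critique step; returns (ok, feedback)."""
    raise NotImplementedError("stand-in for a checker or critic model")

def solve(problem: str, max_rounds: int = 10) -> str | None:
    """Iterate: generate a candidate, critique it, feed the critique back."""
    prompt = problem
    for _ in range(max_rounds):
        candidate = query_model(prompt)
        ok, feedback = verify_proof(candidate)
        if ok:
            return candidate
        prompt = f"{problem}\n\nPrevious attempt failed:\n{feedback}"
    return None
```

The point is that the scaffold, not the raw model, decides how many attempts are made and what feedback each attempt sees, which is one reason results vary so much across scaffolds.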
|
|
| ▲ | pinkmuffinere 6 hours ago | parent | prev | next [-] |
| As someone with only passing exposure to serious math, this section was by far the most interesting to me: > The author assessed the problem as follows. > [number of mathematicians familiar, number trying, how long an expert would take, how notable, etc] How reliably can we know these things a priori? Are these mostly guesses? I don't mean to diminish the value of guesses; I'm curious how reliable these kinds of guesses are. |
| |
| ▲ | qnleigh 5 hours ago | parent | next [-] | | For number of mathematicians familiar with and actively working on the problem, modern mathematics research is incredibly specialized, so it's easy to keep track of who's working on similar problems. You read each other's papers, go to the same conferences etc. For "how long an expert would take" to solve a problem, for truly open problems I don't think you can usually answer this question with much confidence until the problem has been solved. But once it has been solved, people with experience have a good sense of how long it would have taken them (though most people underestimate how much time they need, since you always run into unanticipated challenges). | |
| ▲ | ramblingrain 5 hours ago | parent | prev | next [-] | | Read about Paul Erdös... not all math is the Riemann Hypothesis, there is yeoman's work connecting things and whatever... | |
| ▲ | jasonfarnon 4 hours ago | parent | prev [-] | | Certainly knowing how many/which people are working on a problem you are looking at, and how long it will take you to solve it, are critical skills in being a working researcher. What kind of answer are you looking for? It's hard to quantify. Most people suck at this type of assessment as PhD students and then get better as time goes on. |
|
|
| ▲ | tombert 5 hours ago | parent | prev | next [-] |
| I was trying to get Claude and Codex to write a proof of the Collatz conjecture in Isabelle, but annoyingly they didn't solve it, and I don't feel like I'm any closer than when I started. AI is useless! In all seriousness, this is pretty cool. I suspect there's a lot of theoretical math that hasn't been proved simply because of the "size" of the proof. An AI feedback loop into something like Isabelle or Lean does seem like it could open up a lot of proofs. |
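The reason proof assistants pair so well with such a loop is that checking is mechanical: the assistant either accepts a proof or reports exactly where it fails, giving the model an unambiguous signal. A deliberately trivial Lean 4 example of a machine-checkable statement, assuming the standard library's `Nat.add_comm` lemma:

```lean
-- Lean either accepts this proof term or rejects it with a precise
-- error; that binary verdict is what an AI feedback loop iterates on.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```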
| |
| ▲ | qnleigh 5 hours ago | parent [-] | | I got Gemini to find a polynomial-time algorithm for integer factoring, but then I mysteriously got locked out of my Google account. They should at least refund me the tokens. | | |
| ▲ | fxtentacle 5 hours ago | parent [-] | | That sounds like the start of a very lucrative career. Are you sure it was Gemini and not an AI competitor offering affiliate commission? ;) |
|
|
|
| ▲ | bandrami 3 hours ago | parent | prev | next [-] |
| It's deeply surprising to me that LLMs have had more success proving higher math theorems than making successful consumer software |
| |
| ▲ | 59nadir 3 hours ago | parent | next [-] | | Software developers have spent decades at this point discounting and ignoring almost all objective metrics for software quality, and the industry as a whole has developed a general disregard for any metric that isn't time-to-ship (and even there it will ignore faster alternatives in favor of hyped choices). (Edit: Yes, I'm aware a lot of people care about FP, "Clean Code", etc., but these are all red herrings that don't actually have anything to do with quality. At best they are guidelines for less experienced programmers, and at worst a massive waste of time if you use more than one or two suggestions from their collection of ideas.) Most of the industry couldn't apply objective metrics to their code and the artifacts they produce without also abandoning their entire software stack because of the results. They're using the only metric they've ever cared about: time-to-ship. The results are just a sped-up version of what we've had for more than two decades now: software is getting slower, buggier, and less usable. If you don't have a good regulating function for what represents real quality, you can't really expect systems that just pump out code to iterate very well on anything. There are very few forcing functions available to produce high-quality results through iteration. | |
| ▲ | bandrami 3 hours ago | parent [-] | | But we don't even seem to be getting faster time-to-ship in any way that anybody can actually measure; it's always some vague sense of "we're so much more productive". | | |
| ▲ | 59nadir 2 hours ago | parent [-] | | That's a fair observation and one that I don't really have an answer for. I can say from personal experience that I believe that shipping nonsense code has never been faster. That's just an anecdote, obviously. We need a bigger version of the METR study on perceived vs. real productivity[0], I guess. It's a thankless job, though, since people will assume/state even at publication time that "Everything has progressed so much, those models and agents sucked, everything is 10 times better now!" and you basically have to start a new study, repeat ad infinitum. One problem that really complicates things is that the net competency of these models seems really spotty and uneven. They're apparently out here solving math problems that seemingly "require thinking", but at the same time will write OpenGL code that will produce black screens on basically every driver, not produce the intended results and result in hours of debugging time for someone not familiar enough. That's despite OpenGL code being far more prevalent out there than math proofs, presumably. How do you reliably even theorize about things like this when something can be so bad and (apparently) so good at the same time? 0 - https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o... |
|
| |
| ▲ | staticassertion 2 hours ago | parent | prev [-] | | They haven't, not at all as far as I can tell. This math problem appears to be a nice chore to be solved, the equivalent to "Claude, optimize this code" or "Write a parser", which is being done 100000x a day. | | |
| ▲ | famouswaffles an hour ago | parent | next [-] | | The original researchers who proposed this problem tried and failed multiple times to solve it. Does that sound like a 'nice chore to be solved' to you ? | | |
| ▲ | staticassertion an hour ago | parent [-] | | That's interesting context, where do you see that? I'm going off of the label "Moderately interesting". edit: I see in the full write up that the contributor says that they'd estimate an expert would take 1-3 months to do this. They also note that they came up with this solution independently but hadn't confirmed it. | | |
| ▲ | famouswaffles an hour ago | parent [-] | | https://epochai.substack.com/p/first-ai-solution-on-frontier... >The newly-solved problem came from Will Brian, who had placed it in the Moderately Interesting category. It is a conjecture from a paper he wrote with Paul Larson in 2019. They were unable to solve it at the time, or in several attempts since. Brian had this to say. | | |
| ▲ | staticassertion an hour ago | parent [-] | | I actually still don't see the source for them trying several times, but we can take that for granted. Regardless, as I said: 1. It's labeled as "moderately interesting" 2. They said that they expect an expert could solve it in 1-3 months 3. They had already come up with the solution that the AI had but weren't convinced it would have worked So how big was the gap here, do you think? | | |
| ▲ | famouswaffles an hour ago | parent [-] | | Yes, a "moderately interesting" open problem. I can't think of any chores that would take an expert months to complete. I can't think of any chores I've completed but was then 'unconvinced could work'. Please sit down and think about what you are saying here. Are we still talking about chores? One of the stranger phenomena as machines get better: the incessant need (seemingly driven by human exceptionalism) to downplay each result just ends up belittling humans in the process. This is significant. Your analogy is wrong. It's fine to admit it. | |
| ▲ | staticassertion 31 minutes ago | parent [-] | | Writing a complex parser or certainly a compiler is a 1 - 3 month project, for example. Again, I'm not trying to downplay this, but to frame this accurately. I think an AI being able to build a parser/ compiler is cool too. > One of the more strange phenomena with machines getting better and the incessant need (seemingly driven by human exceptionalism) to downplay each result, is that you just end up belittling humans in the process. I don't believe in human exceptionalism at all, don't attribute positions to me. | | |
| ▲ | famouswaffles 10 minutes ago | parent [-] | | >Writing a complex parser or certainly a compiler is a 1 - 3 month project, for example. 1. Estimating time to completion for something that has been done multiple times before and for an open problem that has never been solved are entirely different matters. 1 to 3 months is an educated guess and, more likely than not, an underestimate. 2. I do not think months-long complex compilers and parsers are being routinely completed by LLMs, as your original comment implied. Regardless, they are different classes of problems. | |
| ▲ | staticassertion 5 minutes ago | parent [-] | | I don't get what either of your points is intended to demonstrate. Let's revisit the first post I replied to: > It's deeply surprising to me that LLMs have had more success proving higher math theorems than making successful consumer software As far as I can tell, they absolutely have not had more success in this area relative to making successful consumer software. |
|
|
| |
| ▲ | famouswaffles an hour ago | parent | prev [-] | | Also, the full write up does not say the researchers solved it. | | |
| ▲ | staticassertion 32 minutes ago | parent [-] | | > I had previously wondered if the AI’s approach might be possible, but it seemed hard to work out. They didn't solve it, that's fair. They did consider the approach already. |
|
|
|
|
| |
| ▲ | calf 2 hours ago | parent | prev [-] | | But the title claims it is a "frontier" math problem, so which is it, really? |
|
|
|
| ▲ | letmetweakit 2 hours ago | parent | prev | next [-] |
| Impressive, but it will take away so much sense of accomplishment from so many people. I find that really sad. |
|
| ▲ | osti 6 hours ago | parent | prev | next [-] |
| Seems like the high-compute parallel-thinking models weren't even needed; both the normal 5.4 and Gemini 3.1 Pro solved it. Somehow Gemini 3 Deep Think couldn't. |
|
| ▲ | danbruc an hour ago | parent | prev | next [-] |
| Do they also publish the raw output of the model, i.e. not only the final response but also everything generated for internal reasoning or tool use? |
|
| ▲ | anematode 2 hours ago | parent | prev | next [-] |
| Fantastic and exciting stuff! I wonder how much of this meteoric progress in creating genuinely novel mathematics comes down to the training data for mathematics being of a much higher standard than, say, code. |
|
| ▲ | karmasimida 6 hours ago | parent | prev | next [-] |
| There's no denying it at this point: AI can produce something novel, and it will be doing more of this going forward. |
| |
| ▲ | XCSme 5 hours ago | parent | next [-] | | Not sure if AI can have clever or new ideas; it still seems to me that it combines existing knowledge and executes algorithms. I am not necessarily saying humans do something different either, but I have yet to see a novel solution from an AI that is not simply an extrapolation of current knowledge. | |
| ▲ | qnleigh 4 hours ago | parent | next [-] | | Speaking as a researcher, the line between new ideas and existing knowledge is very blurry and maybe doesn't even exist. The vast majority of research papers get new results by combining existing ideas in novel ways. This process can lead to genuinely new ideas, because the results of a good project teach you unexpected things. My biggest hesitation with AI research at the moment is that they may not be as good at this last step as humans. They may make novel observations, but will they internalize these results as deeply as a human researcher would? But this is just a theoretical argument; in practice, I see no signs of progress slowing down. | | |
| ▲ | coderenegade 2 hours ago | parent [-] | | This is my take as well. A human who learns, say, a Towers of Hanoi algorithm will be able to apply it and use it next time without having to figure it out all over again. An LLM would probably get there eventually, but would have to do it all over again from scratch the next time. This makes it difficult to combine lessons in new ways. Any new advancement relying on that foundational skill requires, essentially, climbing the whole mountain from the ground. I suppose the other side of it is that if you add what the model has figured out to the training set, it will always know it. |
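For reference, the algorithm in question is small enough to state exactly; the comment's point is that a human internalizes it once, while a model may re-derive or re-retrieve it each session. A standard recursive version in Python:

```python
def hanoi(n, source, target, spare, moves):
    """Move n disks from source to target, recording each move."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)  # clear the top n-1 disks
    moves.append((source, target))              # move the largest disk
    hanoi(n - 1, spare, target, source, moves)  # put the n-1 disks back on top

moves = []
hanoi(3, "A", "C", "B", moves)
print(len(moves))  # 7, i.e. 2^3 - 1, the optimal move count
```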
| |
| ▲ | dotancohen 5 hours ago | parent | prev | next [-] | | We call that Standing On The Shoulders Of Giants and revere Isaac Newton as clever, even though he himself stated that he was standing on the shoulders of giants. | |
| ▲ | nkozyra 5 hours ago | parent | prev | next [-] | | Clever/novel ideas are very often subtle deviations from known, existing work. Sometimes just having the time/compute to explore the available space with known knowledge is enough to produce something unique. | |
| ▲ | glalonde 4 hours ago | parent | prev | next [-] | | "extrapolation" literally implies outside the extents of current knowledge. | |
| ▲ | aoeusnth1 3 hours ago | parent | prev | next [-] | | How would you know if it wasn't an extrapolation of current knowledge? Can you point me to something humans have done that isn't an extrapolation? |
| ▲ | salomonk_mur 5 hours ago | parent | prev [-] | | There is no such thing. All new ideas are derived from previous experiences and concepts. | | |
| ▲ | Madmallard 2 hours ago | parent [-] | | The difference people are neglecting to point out is the experiences we have versus the experiences the AI has. We have at least 5 senses, our thoughts, feelings, hormonal fluctuations, sleep and continuous analog exposure to all of these things 24/7. It's vastly different from how inputs are fed into an LLM. On top of that we have millions of years of evolution toward processing this vast array of analog inputs. |
|
| |
| ▲ | slashdave 4 hours ago | parent | prev | next [-] | | I mean, I can run a pseudo random number generator, and produce something novel too. | |
| ▲ | staticassertion 2 hours ago | parent | prev [-] | | Is this novel? It's new. But we already know AI can generate new things, any statistical reassembly of any content will generate new things. It's not to downplay this, but it's unclear what "novel" means here or what you think the implications are. |
|
|
| ▲ | daveguy 5 hours ago | parent | prev | next [-] |
| New goalpost, and I promise I'm not being facetious at all, genuinely curious: Can an AI pose a frontier math problem that is of any interest to mathematicians? I would guess that 1) AI solving frontier math problems and 2) AI posing interesting/relevant math problems would, together, be an "oh shit" moment. Because that would be true PhD-level research. |
| |
| ▲ | kgeist 18 minutes ago | parent | next [-] | | Considering that an LLM simply remixes what it finds in its learned distribution over text, it's possible that it can pose new math problems by identifying gaps ("obvious" in retrospect) that humans may have missed (like connecting two known problems to pose a new one). What LLMs can't currently do is pose new problems by observing the real world and its ramifications, like that moving sofa problem. | |
| ▲ | EternalFury 2 hours ago | parent | prev [-] | | Yes. I doubt it can do that. |
|
|
| ▲ | vlinx 5 hours ago | parent | prev | next [-] |
| This is a remarkable result if confirmed independently. The gap between solving competition problems and open research problems has always been significant - bridging that gap suggests something qualitatively different in the model capabilities. |
|
| ▲ | t0lo 3 hours ago | parent | prev | next [-] |
| What are the odds that this is because OpenAI is pouring more money into high-publicity stunts like this, rather than its model actually being better than Anthropic's? |
|
| ▲ | measurablefunc 5 hours ago | parent | prev | next [-] |
| I guess this means AI researchers should be out of jobs very soon. |
|
| ▲ | an0malous 6 hours ago | parent | prev | next [-] |
| I feel like there’s a fork in our future approaching where we’ll either blossom into a paradise for all or live under the thumb of like 5 immortal VCs |
| |
| ▲ | XCSme 5 hours ago | parent [-] | | Change is always hard; even if things will be good in 20 years, the transition itself is always tough. | |
| ▲ | reverius42 5 hours ago | parent [-] | | Sometimes the transition is tough and then the end state is also worse! Hoping that won't be the case with AI but we may need some major societal transformations to prevent it. |
|
|
|
| ▲ | samthegitty_24 37 minutes ago | parent | prev | next [-] |
| wow nice |
|
| ▲ | jvdsf 3 hours ago | parent | prev | next [-] |
| Really? He had his hands on the wheel the whole time. GPT didn't do the math. |
| |
|
| ▲ | renewiltord 6 hours ago | parent | prev | next [-] |
| Fantastic news! That means with the right support tooling existing models are already capable of solving novel mathematics. There’s probably a lot of good mathematics out there we are going to make progress on. |
|
| ▲ | gnarlouse 5 hours ago | parent | prev | next [-] |
| We only get one shot. |
|
| ▲ | data_maan 5 hours ago | parent | prev [-] |
| A model whose internals we don't have access to solved a problem that, for all we know, was already in its training data. Great, I'm impressed. |