| ▲ | adamgordonbell 11 hours ago |
| Here is the chat: don't search the internet. This is a test to see how well you can craft non-trivial, novel and creative proofs given a "number theory and primitive sets" math problem. Provide a full unconditional proof or disproof of the problem.
{{problem}}
REMEMBER - this unconditional argument may require non-trivial, creative and novel elements.
Then "Thought for 80m 17s" https://chatgpt.com/share/69dd1c83-b164-8385-bf2e-8533e9baba... |
|
| ▲ | urutom 4 hours ago | parent | next [-] |
| What I find fascinating about the shared prompt isn’t just the result, but the visible thinking process. Math papers usually skip all the messy parts and just present the polished proof. But here you get something closer to their notepad. I also find it oddly endearing when the AI says things like “Interesting!” It almost feels like a researcher encouraging themselves after a bit of progress. It gives me the rare feeling of watching the search itself, not just the final result. |
| |
| ▲ | bertil 33 minutes ago | parent | next [-] | | > the AI says things like “Interesting!” My experience of those utterances is that it’s purely phatic mimicry: they lack genuine intuitive surprise; it’s just marking a very odd shift in direction. The problem isn’t the lack of a path, it’s that the rhetorical follow-up to those leaps is usually a relevant result, so the stream of tokens ends up rapidly over-playing its own conviction. That’s why it’s necessary (and often ineffective) to tell them to validate their findings thoroughly: too much of their training is “That’s odd” followed by “Eureka!” and not “Nevermind…” | | |
| ▲ | jackcarter 7 minutes ago | parent | next [-] | | It’s funny that this is probably due to bias in the training texts, right? Humans are way more likely to publish their “Eureka!” moments than their screwups… if they did publish the screwups, maybe models wouldn’t exhibit this behavior. Now that AI labs have all these “Nevermind” texts to train on, maybe it’s getting easier to correct? (It would require some postprocessing to classify the AI outputs as successful or not before training.) | |
| ▲ | sigbottle 22 minutes ago | parent | prev | next [-] | | I think that a lot of models have to sprinkle in a lot of "fluff" in their thinking to stay within the right distribution. They have language as their only medium; the way we annotate context is via brackets, and then training them to hopefully respect the brackets. I'd imagine that either top labs explicitly train, or through the RL process the models implicitly learn, to spam tokens that keep them 'within distribution', since everything's going through the same channel and there's no fine-grained separation between things. Philosophically, it's not like you're a detached observer who simply reasons over all possible hypotheses. Ever get stuck in a dead end and find it hard to dig yourself out? If you were a detached observer, it'd be pretty easy to just switch gears. But it's not (for humans). | |
| ▲ | 24 minutes ago | parent | prev [-] | | [deleted] |
| |
| ▲ | rafaelmn 30 minutes ago | parent | prev | next [-] | | This is another underrated benefit of working with LLMs. When I work I don't take detailed notes about my thinking, decisions, context, etc. I just focus on code. If I get interrupted it takes me a while to get back into the flow. With LLMs I just read back a few turns and I'm back in the loop. | |
| ▲ | andrepd 5 minutes ago | parent | prev | next [-] | | The simulacrum of a thing is not the thing! Not only is the "interesting!" unrelated to any "thought process", the whole """thinking""" output is not a representation of a thought process but merely a post-facto confabulation that sounds appropriately human-like. | |
| ▲ | notahacker an hour ago | parent | prev | next [-] | | The actual iteration through various learned approaches to dealing with problems I'd probably find fascinating if I understood the maths! Especially if I knew it well enough to know which approaches were conventional and which weren't. I find the AI pronouncing things "interesting!" less interesting on the basis that even though in this case it crops up in the thinking rather than flattering the user in the chat, it's almost as much of an AI affectation as the emdash. | | |
| ▲ | jdmichal an hour ago | parent [-] | | I always assumed the "interesting!" markers were actual markers. A kind of tag for the system to annotate its context. | | |
| ▲ | notahacker 29 minutes ago | parent [-] | | Probably does function like that in terms of highlighting context, in this case probably to the system's benefit. But in general exclamations of "interesting!" seems like the stereotypical AI default towards being effusive, and we've all seen the chat logs where AI trained to write that way responding with "interesting", "great insight!" towards a user's increasingly dubious inputs is an antipattern... |
|
| |
| ▲ | cubefox 14 minutes ago | parent | prev [-] | | [dead] |
|
|
| ▲ | petra an hour ago | parent | prev | next [-] |
| I don't have ChatGPT, but I do have Gemini and Claude. But how do you make a language model think for 80 minutes ??? |
| |
| ▲ | staticassertion 2 minutes ago | parent | next [-] | | In my experience, you can tell them "Don't stop working on this until complete" and they'll go for an hour or more. | |
| ▲ | zeven7 9 minutes ago | parent | prev | next [-] | | I have Gemini and ChatGPT and keep them on the highest thinking settings. ChatGPT will regularly think 40-60 minutes on the same problem that Gemini will think 10-15 minutes on. The quality of ChatGPT’s response is usually a little higher, but not that much higher. My takeaway is that Gemini is better at thinking faster, maybe has better, more dedicated hardware behind it. So I use Gemini if I want a faster answer, and ChatGPT if I want to push the quality of the answer a little higher. | |
| ▲ | somewhatgoated an hour ago | parent | prev | next [-] | | It has a “high effort” mode that makes it think really long | |
| ▲ | baxtr an hour ago | parent | prev [-] | | Give it hard enough problems? |
|
|
| ▲ | chvid 3 hours ago | parent | prev | next [-] |
| I am curious if there is a “harness” for maths out there (like the system prompt and tool collection in Claude code but for maths instead of coding)? Asking the llm to structure its response in plan and implementation, allowing it to call tools like python, sage, lean etc. |
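The plan-then-implementation loop described in that comment can be sketched in a few lines. Everything below is hypothetical scaffolding, not a real product: `ask_model` is a stand-in for whatever LLM API you use, and only the Python tool is wired up (a fuller harness would also parse tool-call requests out of the reply and loop).

```python
import subprocess
import sys

def run_python(code: str) -> str:
    """Tool: run a Python snippet (e.g. a numeric sanity check) and return its output."""
    r = subprocess.run([sys.executable, "-c", code],
                       capture_output=True, text=True, timeout=60)
    return r.stdout if r.returncode == 0 else r.stderr

def check_lean(source: str, path: str = "proof.lean") -> bool:
    """Tool: write a Lean file and see whether `lean` elaborates it without errors.
    Assumes a Lean toolchain is installed and on PATH."""
    with open(path, "w") as f:
        f.write(source)
    return subprocess.run(["lean", path]).returncode == 0

def solve(problem: str, ask_model) -> str:
    """Two-phase harness: ask for a plan, then for its execution.
    This sketch stops after one round instead of looping on tool calls."""
    plan = ask_model("Produce a numbered proof plan for:\n" + problem)
    return ask_model("Carry out this plan step by step:\n" + plan)
```

`solve` takes the model-calling function as an argument precisely so the harness stays independent of any one vendor's API.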
| |
| ▲ | brandensilva 2 hours ago | parent [-] | | Also curious about this, it seems like it would be important to guide these tools more specifically based on the domain of expertise. |
|
|
| ▲ | nycdatasci 9 hours ago | parent | prev | next [-] |
| Tried w/ 5.5 Pro, Extended Thinking. 17 minutes: ----------------------------- Yes. In fact the proposed bound is true, and the constant 1 is sharp. Let w(a)= 1/alog(a) I will prove that, uniformly for every primitive A⊂[x,∞),
∑w(a)≤1+O(1/log(x))
,
which is stronger than the requested 1+o(1). https://chatgpt.com/share/69ed8e24-15e8-83ea-96ac-784801e4a6... |
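The claimed inequality is easy to sanity-check numerically on finite truncations (which of course proves nothing about the full statement). A minimal sketch, assuming the standard definition that a primitive set contains no element properly dividing another:

```python
import math

def is_primitive(A):
    """True if no element of A properly divides another element of A."""
    return not any(a != b and b % a == 0 for a in A for b in A)

def weighted_sum(A):
    """Sum of w(a) = 1/(a log a) over A."""
    return sum(1 / (a * math.log(a)) for a in A)

def primes_between(lo, hi):
    """Naive trial-division prime list; the primes form a primitive set."""
    return [n for n in range(max(lo, 2), hi + 1)
            if all(n % d for d in range(2, int(n ** 0.5) + 1))]

A = primes_between(10, 10_000)   # a primitive subset of [10, 10000]
assert is_primitive(A)
print(weighted_sum(A))           # comfortably below 1 for this truncation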
| |
|
| ▲ | cryptoegorophy 10 hours ago | parent | prev | next [-] |
| Mine took 20min. Pro.
https://chatgpt.com/share/69ed83b1-3704-8322-bcf2-322aa85d7a...
But I wish I was math smart to know if it worked or not. |
| |
| ▲ | liweic 5 hours ago | parent | next [-] | | Weirdly enough, Pro + extended with the same prompt just output directly, without thinking: https://chatgpt.com/s/t_69edd2d9dc048191b1476db92c0dedf8
. Does this mean the result was cached or that it simply routes to a different model silently based on the user? | | |
| ▲ | Vachyas 4 hours ago | parent [-] | | The link you provided is for a canvas I think rather than the convo |
| |
| ▲ | vjerancrnjak 8 hours ago | parent | prev [-] | | Ask it to formalize it in Lean. | | |
| ▲ | utopiah 7 hours ago | parent | next [-] | | If they aren't "smart enough" to know if it works, they most likely are also unable to verify whether the Lean formalization is indeed the one that matches the problem they were trying to solve. | | |
| ▲ | timjver 5 hours ago | parent [-] | | Verifying that every step in a (potentially long) proof is sound can of course be much, much harder than verifying that a definition is correct. That's kind of the whole point. | | |
| ▲ | LeCompteSftware 5 hours ago | parent [-] | | That's not what the parent comment meant. They meant checking the Lean-language definitions actually match the mathematical English ones, and that the Lean theorems match the ones in the paper. If that's true then you don't actually need to check the proofs. But you absolutely need to check the definitions, and you can't really do that without sufficient mathematical maturity. | | |
| ▲ | smallnamespace 4 hours ago | parent [-] | | Yes, and the child comment’s point is that formalizing the problem is likely easier than having the LLM verify that each step of a long deduction is correct, which is why Lean might be helpful. | | |
| ▲ | LeCompteSftware 2 hours ago | parent [-] | | But both of you are ignoring the parent comment! Actually you're ignoring the context of the thread. Originally someone said "I wish I was math smart to know if [this vibe-mathematics proof] worked or not." They did NOT say "I'd like to check but I am too lazy." Suggesting "ask it to formalize it in Lean" is useless if you're not mathematically mature enough to understand the proof, since that means you're not mathematically mature enough to understand how to formalize the problem. Then "likely easier" is a moot point. A Lean program you're not knowledgeable enough to sanity-check is precisely as useless as a math proof you're not knowledgeable enough to read. | | |
|
|
|
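For the curious, formalizing just the *statement* is the smaller half of the job. A rough Lean 4 sketch of what that might look like; everything here, including the names `Primitive` and `w` and the ε–x rendering of the o(1), is an ad hoc assumption rather than Mathlib's actual spelling, and there is no claim it compiles as-is:

```lean
-- A set of naturals is primitive if no element properly divides another.
def Primitive (A : Set ℕ) : Prop :=
  ∀ a ∈ A, ∀ b ∈ A, a ∣ b → a = b

-- The weight w(a) = 1/(a log a), as a real number.
noncomputable def w (a : ℕ) : ℝ := 1 / (a * Real.log a)

-- The shape of the statement the model would then be asked to prove:
-- for every primitive A ⊆ [x, ∞), the weighted sum is at most 1 + o(1).
theorem primitive_sum_bound :
    ∀ ε > 0, ∃ x : ℕ, ∀ A : Set ℕ,
      Primitive A → (∀ a ∈ A, x ≤ a) → ∑' a : A, w (a : ℕ) ≤ 1 + ε := by
  sorry
```

As the thread notes, the value of this exercise hinges on a human checking that these definitions match the English statement; the kernel only certifies the proof relative to them.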
| |
| ▲ | dbdr 8 hours ago | parent | prev | next [-] | | That's great if it works. But it's way harder to produce a formal proof. So my expectation is that this will fail for most difficult problems, even when the non-formal proof is correct. | |
| ▲ | DonHopkins 6 hours ago | parent | prev [-] | | Formalize this in the form of a Iranian Lego Trump Dis Rap video. |
|
|
|
| ▲ | sfdlkj3jk342a 2 hours ago | parent | prev | next [-] |
| When using the web interface for ChatGPT like this, is there any way to tell which model is actually being used? |
|
| ▲ | DeathArrow 4 hours ago | parent | prev | next [-] |
| >don't search the internet. I think this was key. Otherwise the LLM could think it can't be done. |
| |
| ▲ | amelius 2 hours ago | parent | next [-] | | But it was trained on the internet. | |
| ▲ | embedding-shape 3 hours ago | parent | prev [-] | | "Knowing" (guessing, really) what is possible and what isn't is a huge deciding factor in whether you can do a thing: if you "know" it isn't possible you'll probably never be able to do it, but if you didn't know it wasn't possible, it is possible :) |
|
|
| ▲ | ipaddr 11 hours ago | parent | prev | next [-] |
| Tried the same prompt and ended up nowhere close on the free plan. |
| |
| ▲ | jasonfarnon 10 hours ago | parent | next [-] | | Is there a known lag before the Pro plan's abilities migrate to the free plans? | | |
| ▲ | brianjking 10 hours ago | parent | next [-] | | As far as consumer access goes, GPT 5.5 Pro is not available to any plan outside the ChatGPT Pro ($100 or $200) tier or the API. | | |
| ▲ | jasonfarnon 10 hours ago | parent | next [-] | | Yes, but don't we expect GPT 5.5 Pro will eventually reach the free tier? Maybe I'm missing something because I only use the free tier. But the free tier has gotten way better over the last few years. I'm pretty sure, based on descriptions on this site from paid subscribers, that the free tier now is better than the paid tier of, say, 2 years ago. That's the lag I'm wondering about. | |
| ▲ | manfromchina1 9 hours ago | parent | next [-] | | Free ChatGPT is like a fast car with a barely responsive steering wheel. Guardrails on that thing are insane. Even for math. It won't let you think. It will try to fix mistakes you haven't even made yet, based on intent that was ascribed to you for no reason. It veers off in some crazy directions thinking that's what you meant, and trying to address even a little bit of that creates almost a combinatorial explosion of even more wrong things. That's why I stick to Claude. The latter is chill and only addresses what you had typed. It isn't verbose and actually asks you what you're getting at with your post. That said, ChatGPT is more technical and can easily solve math problems that stump Claude. | |
| ▲ | nextaccountic 7 hours ago | parent [-] | | So this doesn't happen in the paid plans of ChatGPT? But why? | | |
| ▲ | virgildotcodes 4 hours ago | parent [-] | | Paid plans give you access to much larger, more intelligent models which have thinking enabled (inference time compute). In the example here you can see GPT Pro taking 20-80 minutes to respond with the proof. All this is far more expensive to serve so it’s locked away behind paid plans. |
|
| |
| ▲ | vessenes 9 hours ago | parent | prev | next [-] | | I do not think this is true. You will continue to get smaller, cheaper-to-host models in the free tier that are distilled from current and former frontier models. They will continue to improve, but I’d be very surprised if, e.g., 5.4-mini (I think this is the free tier model) beat o3 on many benchmarks, or real world use cases. I won’t even leave chatGPT on “Auto” under any circumstances - it’s vastly worse on hallucinations, sycophancy, everything, basically. Anyway, your needs may be met perfectly fine on the free tier product, but you’re using a very different product than the Pro tier gets. | |
| ▲ | 6 hours ago | parent | prev | next [-] | | [deleted] | |
| ▲ | hyraki 10 hours ago | parent | prev [-] | | You should pay for it if you find value in it. | | |
| |
| ▲ | 9 hours ago | parent | prev [-] | | [deleted] |
| |
| ▲ | andai 10 hours ago | parent | prev | next [-] | | Tangential but I learned today that GPT-5.5 in ChatGPT (Plus) has a smaller context window than the one in the API. (Or at least it thinks it does.) I'd guess / hope the Pro one has the full context window. | | |
| ▲ | refulgentis 9 hours ago | parent [-] | | Notably, 5.5 has a higher price on the API for context beyond what ChatGPT gets, and 5.5 Pro on the API does not differentiate based on context size (it’s eye-bleedingly expensive already :) |
| |
| ▲ | vessenes 10 hours ago | parent | prev [-] | | Do not use the free plan. It is not good. |
| |
| ▲ | Someone1234 10 hours ago | parent | prev | next [-] | | Does the free plan even have access to thinking models? | | | |
| ▲ | Matticus_Rex 10 hours ago | parent | prev [-] | | Was this a surprise? |
|
|
| ▲ | ArtIntoNihonjin 9 hours ago | parent | prev | next [-] |
| [dead] |
|
| ▲ | 10 hours ago | parent | prev [-] |
| [deleted] |