brendoelfrendo 3 days ago

Bad news: it doesn't seem to work as well as you might think: https://arxiv.org/pdf/2508.01191

As one might expect: the AI isn't actually thinking, it's just spending more tokens on the problem. This sometimes leads to the desired outcome, but the phenomenon is very brittle and disappears when the AI is pushed outside the bounds of its training.

To quote their discussion, "CoT is not a mechanism for genuine logical inference but rather a sophisticated form of structured pattern matching, fundamentally bounded by the data distribution seen during training. When pushed even slightly beyond this distribution, its performance degrades significantly, exposing the superficial nature of the “reasoning” it produces."

hodgehog11 3 days ago | parent | next [-]

I keep wondering whether people have actually examined how this work draws its conclusions before citing it.

This is science at its worst, where you start at an inflammatory conclusion and work backwards. There is nothing particularly novel presented here, especially not in the mathematics; obviously performance will degrade on out-of-distribution tasks (and will do so for humans under the same formulation), but the real question is how out-of-distribution a lot of tasks actually are if they can still be solved with CoT. Yes, if you restrict the dataset, then it will perform poorly. But humans already have a pretty large visual dataset to pull from, so what are we comparing to here? How do tiny language models trained on small amounts of data demonstrate fundamental limitations?

I'm eager to see more works showing the limitations of LLM reasoning, both at small and large scale, but this ain't it. Others have already supplied similar critiques, so let's please stop sharing this one around without a grain of salt.

ipaddr 3 days ago | parent [-]

"This is science at its worst, where you start at an inflammatory conclusion and work backwards"

Science starts with a guess, and you run experiments to test it.

hodgehog11 3 days ago | parent [-]

True, but the experiments are engineered to give the results they want. It's a mathematical certainty that the performance will drop off here, but that is not an accurate assessment of what is going on at scale. If you present an appropriately large and well-trained model with in-context patterns, it often does a decent job, even when it isn't trained on them. By nerfing the model (4 layers), the conclusion is foregone.

I honestly wish this paper actually showed what it claims, since it is a significant open problem to understand CoT reasoning relative to the underlying training set.

lossolo 3 days ago | parent [-]

Without a provable hold-out, the claim that "large models do fine on unseen patterns" is unfalsifiable. In controlled from-scratch training, CoT performance collapses under modest distribution shift, even when the chains it produces look plausible. If you have results where the transformation family is provably excluded from training and a large model still shows robust CoT, please share them. Otherwise this paper’s claim stands for the regime it tests.
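
For concreteness, here is a minimal sketch of what I mean by a provable hold-out. The transformation families and task format are made up for illustration; this is not the paper's exact setup:

  import random

  # Toy transformation families over token sequences (names are illustrative).
  FAMILIES = {
      "reverse":     lambda s: s[::-1],
      "rotate_left": lambda s: s[1:] + s[:1],
      "double":      lambda s: [t for t in s for _ in range(2)],
      "drop_first":  lambda s: s[1:],
  }

  def make_split(holdout=("drop_first",)):
      train = {k: v for k, v in FAMILIES.items() if k not in holdout}
      test = {k: v for k, v in FAMILIES.items() if k in holdout}
      assert not set(train) & set(test)  # disjoint by construction
      return train, test

  def sample_examples(families, n, vocab="abcdef", length=5):
      data = []
      for _ in range(n):
          name, fn = random.choice(list(families.items()))
          x = random.choices(list(vocab), k=length)
          data.append({"family": name, "input": x, "target": fn(x)})
      return data

  train_fams, test_fams = make_split()
  train_set = sample_examples(train_fams, 10_000)  # the model is trained only on these
  eval_set = sample_examples(test_fams, 1_000)     # evaluation uses only the held-out family

The point is that the exclusion is enforced by construction: "the model never saw this transformation family" is a checkable property of the data generator, not an assumption about a web-scale corpus.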

simianwords 3 days ago | parent | next [-]

I don't buy this, for the simple reason that benchmarks show much better performance for thinking models than for non-thinking models. Benchmarks already account for the generalisation and "unseen patterns" aspect.

What would be your argument against:

1. CoT models performing way better on benchmarks than normal models

2. people choosing to use CoT models in day-to-day life because they actually find they give better performance

buildbot 3 days ago | parent | prev | next [-]

This paper's claim holds, but only for 4-layer models. Models improve dramatically on out-of-distribution examples at larger scales.

hodgehog11 3 days ago | parent | prev [-]

> claim that "large models do fine on unseen patterns" is unfalsifiable

I know what you're saying here, and I know it is primarily a critique of my phrasing, but establishing something like this is the objective of in-context learning theory and mathematical applications of deep learning. It is possible to prove that sufficiently well-trained models will generalize to certain unseen classes of patterns, e.g. a transformer acting like gradient descent. There is still a long way to go in the theory; it is difficult research!
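
To make that concrete, here is a toy sketch of the kind of setting those results live in: linear regression presented in context, with gradient descent on the context as the provable baseline. The numbers and task are illustrative, not taken from any particular paper:

  import numpy as np

  rng = np.random.default_rng(0)
  d, n_ctx = 8, 16

  def sample_task():
      # Each task is a fresh linear function: never seen verbatim, but
      # inside the task class the theory covers.
      w = rng.normal(size=d)
      X = rng.normal(size=(n_ctx, d))
      y = X @ w
      x_query = rng.normal(size=d)
      return X, y, x_query, float(x_query @ w)

  def gd_in_context(X, y, x_query, steps=500, lr=0.1):
      # The baseline the theory talks about: run gradient descent on the
      # in-context examples, then predict on the query.
      w_hat = np.zeros(d)
      for _ in range(steps):
          w_hat -= lr * X.T @ (X @ w_hat - y) / len(y)
      return float(x_query @ w_hat)

  X, y, xq, target = sample_task()
  print(f"target={target:.3f}  gd-on-context prediction={gd_in_context(X, y, xq):.3f}")

The test-time tasks are fresh draws the model never saw verbatim, yet this line of work shows a sufficiently well-trained transformer matching the baseline across the whole class; that is the sense in which some "unseen patterns" can be provably covered.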

> performance collapses under modest distribution shift

The problem is that the notion of "modest" depends on the scale here. With enough varied data and/or enough parameters, what was once out-of-distribution can become in-distribution. The paper is purposely ignorant of this fact. Yes, the claims hold for tiny models, but I don't think anyone ever doubted this.

razodactyl 3 days ago | parent | prev | next [-]

A viable consideration is that the models will home in on and reinforce an incorrect answer, a natural side effect of LLM technology wanting to push certain answers higher in probability and to repeat anything already in context.

Whether in a conversation or in a thinking context, this doesn't prevent the model from outputting the wrong answer, so the paper on the illusion of thinking makes sense.

What actually seems to be happening is a form of conversational prompting. Of course, with the right back-and-forth with an LLM you can inject knowledge in a way that shifts the natural distribution (again, a side effect of the LLM tech), but by itself it won't naturally get the answer perfect every time.

If this extended thinking were actually working, you would expect the LLM to be able to logically conclude an answer with very high accuracy 100% of the time, which it does not.

dcre 3 days ago | parent | prev | next [-]

The other commenter is more articulate, but you simply cannot draw the conclusion from this paper that reasoning models don't work well. They trained tiny little models and showed those don't work. Big surprise! Meanwhile, every other piece of evidence available shows that reasoning models are more reliable on sophisticated problems. Just a few examples:

- https://arcprize.org/leaderboard

- https://aider.chat/docs/leaderboards/

- https://arstechnica.com/ai/2025/07/google-deepmind-earns-gol...

Surely the IMO problems weren't "within the bounds" of Gemini's training data.

robrenaud 3 days ago | parent [-]

The Gemini IMO result used a model specifically fine-tuned for math.

Certainly they weren't training on the unreleased problems; defining "out of distribution" gets tricky.

simianwords 3 days ago | parent | next [-]

> The Gemini IMO result used a model specifically fine-tuned for math.

This is false.

https://x.com/YiTayML/status/1947350087941951596

This is false even for the OpenAI model

https://x.com/polynoamial/status/1946478250974200272

"Typically for these AI results, like in Go/Dota/Poker/Diplomacy, researchers spend years making an AI that masters one narrow domain and does little else. But this isn’t an IMO-specific model. It’s a reasoning LLM that incorporates new experimental general-purpose techniques."

Workaccount2 3 days ago | parent | prev [-]

Every human taking that exam has fine-tuned for math, specifically on IMO problems.

simianwords 3 days ago | parent | prev | next [-]

This is not the slam dunk you think it is. Thinking longer genuinely provides better accuracy. Sure, there are diminishing returns to increasing thinking tokens.

GPT-5 fast gets many things wrong, but switching to the thinking model very often fixes the issues.

p1esk 2 days ago | parent | prev [-]

They experimented with GPT-2-scale models. It's hard to draw any meaningful conclusions in the GPT-5 era.