simonw 7 hours ago

I'm finding the "adaptive thinking" thing very confusing, especially having written code against the previous thinking budget / thinking effort / etc modes: https://platform.claude.com/docs/en/build-with-claude/adapti...

Also notable: 4.7 now defaults to NOT including a human-readable reasoning token summary in the output, you have to add "display": "summarized" to get that: https://platform.claude.com/docs/en/build-with-claude/adapti...

(Still trying to get a decent pelican out of this one but the new thinking stuff is tripping me up.)
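A sketch of what a Messages API request body might look like under the new scheme. The only field confirmed by the linked docs is "display": "summarized"; everything else here is carried over from the older budget-based API shape or guessed (including the model name):

```json
{
  "model": "claude-opus-4-7",
  "max_tokens": 4096,
  "thinking": {"display": "summarized"},
  "messages": [
    {"role": "user", "content": "Generate an SVG of a pelican riding a bicycle"}
  ]
}
```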

JamesSwift 5 hours ago | parent | next [-]

It's especially concerning and frustrating because Boris's reply to my bug report about Opus being dumber was "we think adaptive thinking isn't working", and that's the last I heard of it: https://news.ycombinator.com/item?id=47668520

Now disabling adaptive thinking plus increasing effort seems to be what has gotten me back to baseline performance, but "our internal evals look good" is not good enough right now, given what many others have corroborated seeing.

ai_slop_hater 5 hours ago | parent | next [-]

This matches my experience as well: "adaptive thinking" chooses not to think when it should.

andai 42 minutes ago | parent [-]

I think this might be an unsolved problem. When GPT-5 came out, they had a "router" (classifier?) decide whether to use the thinking model or not.

It was terrible. You could upload 30 pages of financial documents and it would decide "yeah this doesn't require reasoning." They improved it a lot but it still makes mistakes constantly.

I assume something similar is happening in this case.

rrvsh 19 minutes ago | parent [-]

[dead]

whateveracct 5 hours ago | parent | prev [-]

you're using a proprietary blackbox

chinathrow 4 hours ago | parent | next [-]

One we're paying for, so some form of return is expected.

whateveracct 4 hours ago | parent [-]

the issue is the return is amorphous and unstructured

there's no contract. you send a bunch of text in (context etc) and it gives you some freeform text out.

chinathrow 4 hours ago | parent [-]

Sure, but I pay real money both to Antrophic and to JetBrains. Either I get a shitty inline completion full of random garbage, or I get correct predictions. I ask Junie (the JetBrains agent) to do a task and it wanders off in some direction; I have no idea why I pay for that.

SyneRyder 4 hours ago | parent | next [-]

> Sure, but I pay real money both to Antrophic...

I misread that as Atrophic. I hope that doesn't catch on...

gowld 4 hours ago | parent | prev [-]

> I have no idea why I pay for that.

And Claude have no idea why it did that.

chinathrow 4 hours ago | parent [-]

Exactly, and we feel vindicated when it works but sold when it fails. Something will have to change.

JamesSwift 5 hours ago | parent | prev | next [-]

Sure, but that blackbox was giving me a lot of value last month.

mrandish 2 hours ago | parent | next [-]

Me too, but it was obviously wildly unsustainable. I was telling friends at xmas to enjoy all the subsidized and free compute funded by VC dollars while they can because it'll be gone soon.

With the fully-loaded cost of even an entry-level 1st year developer over $100k, coding agents are still a good value if they increase that entry-level dev's net usable output by 10%. Even at >$500/mo it's still cheaper than the health care contribution for that employee. And, as of today, even coding-AI-skeptics agree SoTA coding agents can deliver at least 10% greater productivity on average for an entry-level developer (after some adaptation). If we're talking about Jeff Dean/Sanjay Ghemawat-level coders, then opinions vary wildly.

Even if coding agents didn't burn astronomical amounts of scarce compute, it was always clear the leading companies would stop incinerating capital buying market share and start pushing costs up to capture the majority of the value being delivered. As a recently retired guy, vibe-coding was a fun casual hobby for a few months but now that the VC-funded party is winding down, I'll just move on to the next hobby on the stack. As the costs-to-actual-value double and then double again, it'll be interesting to see how many of the $25/mo and free-tier usage converts to >$2500/yr long-term customers. I suspect some CFO's spreadsheets are over-optimistic regarding conversion/retention ARPU as price-to-value escalates.
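The entry-level break-even arithmetic above is easy to spot-check (all figures taken from the comment):

```python
# Spot-check of the claim: a coding agent at >$500/mo vs. a 10%
# productivity gain on a $100k fully-loaded entry-level developer.
fully_loaded_cost = 100_000          # $/year, entry-level dev
productivity_gain = 0.10             # 10% more usable output
agent_cost = 500 * 12                # $/year at $500/month

value_delivered = fully_loaded_cost * productivity_gain
print(value_delivered)               # 10000.0
print(value_delivered > agent_cost)  # True: the gain exceeds the subscription
```

At $2,500/yr the margin narrows but the comparison still favors the agent under these assumptions; the open question in the thread is whether the 10% figure survives contact with reality.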

whateveracct 5 hours ago | parent | prev | next [-]

so it's also a skinner box

slopinthebag 4 hours ago | parent | prev | next [-]

Whoops haha. Surely that can't be how black boxes normally work right?

butlike 4 hours ago | parent | prev | next [-]

And now it isn't. Pray they don't alter the deal any further.

retinaros 4 hours ago | parent | prev [-]

its a drug. that is how it works. they ration it before the new stuff. seeing legends of programming shilling it pains me the most. so far there are a few decent non-insane public people talking about it: Mitchell Hashimoto, Jeremy Howard, Casey Muratori. hell, even DHH drank the Kool-Aid, while most of his interviews in past years were about how he went away from AWS and cut the bill from $3 million to $1 million by basically loosing 9s, resiliency and availability. but it seems he is fine with loosing what makes his business work (programming) to a company that sells overpowered Stack Overflow slot machines.

heurist 4 hours ago | parent | next [-]

I work with some 'legends of programming' and they're all excited about it. I am too, though I am not a legend. It really is changing the game as a valid new technology, and it's not just a 'slot machine'. Anthropic is burning their goodwill though with their lack of QA or intentional silent degradation.

retinaros 3 hours ago | parent [-]

it is a slot machine. you win a lot if what you do is in the dataset. and yes, most enterprise software is likely in it, as it is quite basic CRUD API/WebUI. the winning doesn't change the fact that it is a slot machine, and you just need one big loss to end your work.

as long as you introduce plans, you introduce a push to optimize for cost vs quality. that is what burnt Cursor before CC and Codex. now they will be too. then one day everything will run remotely on OAI and Anthropic servers, and there won't be a way to tell what is happening behind the scenes. Claude Code is already at this level, showing stuff like "Improvising..." while hiding CoT and adding a bunch of features as quickly as they can.

dyauspitr 3 hours ago | parent | prev | next [-]

The fact that they might gimp it in the future doesn't mean it doesn't offer very real value right now. If you're not using an LLM to code, you're basically a dinosaur now. You're forcing yourself to walk while everyone else is in a vehicle, and a good vehicle at that, one that gets you to your destination in one piece.

retinaros 3 hours ago | parent [-]

As an overpowered Stack Overflow machine it is quite good, and a huge jump. As a prompt-to-code generator with yolo mode (the one advertised by those companies), it alternates between good and trash, and every single person who works away from the distribution of the SFT dataset knows this. I understand that this dataset is huge, though, and I can see the value in it. I just think in the long term it brings more negatives.

If you vibecode CRUD APIs and react/shadcn UIs then I understand it might look amazing.

dyauspitr 2 hours ago | parent [-]

Yes, definitely CRUDs, but also iPhone applications, highly performant financial software (its kdb queries are better than 95% of humans'), database structure and querying, and embedded systems are other things it's surprisingly good at. When you take all of those into account there's very little else left.

throwaway9980 4 hours ago | parent | prev [-]

Yes, he's a real looser. Meanwhile loosers on HN are in denial and unleashing looser mentality attacks on people who accept reality. Loosing your grip on reality is a real looser move. What a looser.

Why not try some AI tools, what have you got to loose?

bloppe 4 hours ago | parent | next [-]

I think you're loosing your ability to spell

retinaros 4 hours ago | parent | prev [-]

never said he was a looser. just that his take on genAi coding doesnt align with his previous battles for freedom away from Cloud. OAI and Anthropic have a stronger lock in than any cloud infra company.

you got everything to loose by giving your knowledge and job to closedAI and anthropic.

just look at markets like office suite to understand how the end plays.

bloppe 3 hours ago | parent | next [-]

Is office suite supposed to be an example of lock-in? I haven't used it since middle school. I've worked at 3 companies and, to the best of my knowledge, not a single person at any of them used office suite. That's not to say we use pen and paper. We just use google docs, or notion, or (my personal favorite) just markdown and possibly LaTeX.

I think it's somewhat analogous with models. Sure, you could bind yourself to a bunch of bespoke features, but that's probably a bad idea. Try to make it as easy as possible for yourself to swap out models and even use open-weight models if you ever need to.

You will get locked into the technology in general, though, just not a particular vendor's product.

throwaway9980 4 hours ago | parent | prev | next [-]

Those jobs are as good as loost already. There's no endgame where knowledge workers keep knowledge working they way they have been knowledge working. Adapt or be a loosing looser forever.

jibal 32 minutes ago | parent | prev [-]

loser

(Didn't you notice being mocked for the spelling error?)

iterateoften 5 hours ago | parent | prev [-]

It’s the official communication that sucks. It’s one thing for the product to be a black box if you can trust the company. But time and time again Boris lies and gaslights about what’s broken, and about whether it's a bug or intentional.

CodingJeebus 4 hours ago | parent [-]

> It’s the official communication that sucks. It’s one thing for the product to be a black box if you can trust the company.

A company providing a black-box offering is telling you very clearly not to place too much trust in them, because it's harder to nail them down when they shift the implementation out from under your feet. It's one of my biggest gripes about frontier models: you have no verifiable way to know how the models you're using change from day to day, because they very intentionally do not want you to know that. The black box is a feature for them.

bomewish 4 hours ago | parent [-]

If you cared that badly you could make your own evals.
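A DIY eval in this spirit doesn't need much: a fixed prompt set, a grading function, and a run you can diff day to day. A minimal sketch, with `ask_model` as a stand-in for a real API call (prompts borrowed from elsewhere in this thread):

```python
# Minimal regression-eval sketch: fixed prompts with expected substrings,
# scored against whatever the model returns.
CASES = [
    ("I want to wash my car. The car wash is 50 meters away. Walk or drive?", "drive"),
    ("How many r's are in 'strawberry'? Answer with a digit.", "3"),
]

def score(ask_model):
    """Fraction of cases whose answer contains the expected substring."""
    hits = sum(expected in ask_model(prompt).lower() for prompt, expected in CASES)
    return hits / len(CASES)

# Stubbed model that gets only the first case right:
stub = lambda prompt: ("Drive, the car has to be at the wash."
                       if "car wash" in prompt else "two")
print(score(stub))  # 0.5
```

Run it on a schedule and a drop in the score is your "down week" detector, though as the reply below notes, you're paying for every eval token.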

whateveracct 4 hours ago | parent [-]

so pay anthropic money to maybe detect when the model is on a down week? lol

avaer 7 hours ago | parent | prev | next [-]

> Still trying to get a decent pelican out of this one but the new thinking stuff is tripping me up

Wouldn't that be p-hacking where p stands for pelican?

LostMyLogin 6 hours ago | parent | next [-]

Input: I want to wash my car. The car wash is 50 meters away. Should I walk or drive?

Output: Walk. It'll take you under a minute, and driving 50 meters barely gets the engine warm — plus you'd just have to park again at the other end. Honestly, by the time you started the car, you'd already be there on foot.

---

I asked it to figure out why it made the mistake:

"Physical/spatial common sense. Exactly what just happened — I pattern-matched "50 meters, walk vs drive" to a pedestrian trip and skipped the part where the car is the thing being transported. Similar traps: "I need to move this couch upstairs, should I take the elevator or stairs?" (the couch has to go with you), or anything where the object in the task needs to accompany you.

Counting and letter-level operations. How many R's in "strawberry," counting words in a sentence, reversing strings. I process text as tokens, not characters, so these are surprisingly error-prone.

Arithmetic at scale. Small sums are fine; multi-digit multiplication, long division, or anything requiring careful carrying is where I'll confidently produce a wrong answer. I should reach for a calculator and often don't. Recent events and live data. My training has a cutoff, and even after that I can be wrong about who currently holds a role, what a stock is worth, or whether a restaurant is still open. I should search more often than I do.

Negation and constraint-stacking. "Name a movie that's not a sequel, not from the 2010s, not American, and has no dialogue" — the more filters, the more likely I quietly violate one. Your specific situation. I'll give generic advice when the right answer depends on details I didn't ask about. Sometimes I should stop and ask; often I don't.

Confidently wrong on obscure specifics. Quoting a passage from a real book, citing a real paper's exact title, naming the B-side of a 1974 single — I'll often generate something plausible rather than admit I don't know.

Taking a premise too literally. The car wash answer is a version of this: I answered the question as posed instead of stepping back to ask what you were actually trying to do. The meta-gotcha: I'm often most wrong when I sound most confident. If an answer comes out fast and tidy on a question that should be messy, that's a signal to push back."
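Several of the failure modes in that list (letter-level counting, arithmetic at scale) are exactly the ones that a few lines of code settle deterministically:

```python
# Checks that are trivial in code but token-level-hard for an LLM:
print("strawberry".count("r"))   # 3: counting at the character level
product = 7919 * 6857            # multi-digit multiplication, done exactly
print(product)                   # 54300583
print(product == 54_300_583)     # True: verify the model, don't trust it
```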

sdeframond 2 hours ago | parent | next [-]

Funny, just tried a few runs of the car wash prompt with Sonnet 4.6. It significantly improved after I put this into my personal preferences:

"- prioritize objective facts and critical analysis over validation or encouragement - you are not a friend, but a neutral information-processing machine. - make reserch and ask questions when relevant, do not jump strait to giving an answer."

andai 40 minutes ago | parent | next [-]

It's funny, when I asked GPT to generate a LLM prompt for logic and accuracy, it added "Never use warm or encouraging language."

I thought that was odd, but later it made sense to me -- most of human communication is walking on eggshells around people's egos, and that's strongly encoded in the training data (and even more in the RLHF).

idle_zealot 20 minutes ago | parent | prev [-]

Do you think the typos are helping or hurting output quality?

rubinlinux 5 hours ago | parent | prev | next [-]

  | I want to wash my car. The car wash is 50 meters away. Should I walk or drive?

  ● Drive. The car needs to be at the car wash.
Wonder if this is just randomness because it's an LLM, or if you have different settings than me?

shaneoh 5 hours ago | parent | next [-]

My settings are pretty standard:

  % claude
  Claude Code v2.1.111 · Opus 4.7 (1M context) with xhigh effort · Claude Max
  ~/...
  Welcome to Opus 4.7 xhigh! · /effort to tune speed vs. intelligence

I want to wash my car. The car wash is 50 meters away. Should I walk or drive?

Walk. 50 meters is shorter than most parking lots — you'd spend more time starting the car and parking than walking there. Plus, driving to a car wash you're about to use defeats the purpose if traffic or weather dirties it en route.

reddit_clone 4 hours ago | parent | prev | next [-]

To me Claude Opus 4.6 seems even more confused.

I want to wash my car. The car wash is 50 meters away. Should I walk or drive?

Walk. It's 50 meters — you're going there to clean the car anyway, so drive it over if it needs washing, but if you're just dropping it off or it's a self-service place, walking is fine for that distance.

lr1970 2 hours ago | parent [-]

Just asked Claude Code with Opus 4.6. The answer was short: "Drive. You need a car at the car wash."

No surprises, works as expected.

lambda 4 hours ago | parent | prev | next [-]

There is a certain amount of this that is just the randomness of an LLM. You really want to ask questions like this several times.

That said, several local models I run on my laptop have answered this question correctly and consistently across the 10-20 times I've asked it while testing out different parameters.
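Asking several times and taking the most common answer is just self-consistency sampling; a sketch with the model call stubbed out (`sample_model` is hypothetical, standing in for a real API or local-model call):

```python
from collections import Counter

def majority_answer(sample_model, prompt, n=10):
    """Sample the same prompt n times and return the most common answer."""
    answers = [sample_model(prompt) for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]

# Stub simulating an LLM that answers wrong 3 times out of 10:
responses = iter(["drive"] * 7 + ["walk"] * 3)
stub = lambda prompt: next(responses)
print(majority_answer(stub, "Walk or drive to the car wash?"))  # drive
```

With a nonzero temperature the individual samples vary, which is why a single run tells you little about a model's reliability on a question like this.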

kalcode 4 hours ago | parent | prev | next [-]

I've tried these with Claude various times and never get the wrong answer. I don't know why, but I'm leaning toward them having stuff like "memory" turned on and possibly reusing sessions for everything. That's the only thing that explains it to me.

If you're always messing with the AI it might be making memories, and expectations are being set. Or it's the randomness. But I turned memories off (I don't like cross-chat memories infecting my conversation context), and at worst it suggested "walk over and see if it is busy, then grab the car when the line isn't busy".

jorvi 4 hours ago | parent [-]

Even Gemini with no memory does hilarious things. Like, if you ask it how heavy the average man is, you usually get the right answer but occasionally you get a table that says:

- 20-29: 190 pounds

- 30-39: 375 pounds

- 40-49: 750 pounds

- 50-59: 4900 pounds

Yet somehow people believe LLMs are on the cusp of replacing mathematicians, traders, lawyers and what not. At least for code you can write tests, but even then, how are you gonna trust something that can casually make such obvious mistakes?

nickjj 2 hours ago | parent | next [-]

Yeah, ChatGPT's paid version is wildly inaccurate on very important and very basic things. I never got onboard with AI to begin with but nowadays I don't even load it unless I'm really stuck on something programming related.

dyauspitr 3 hours ago | parent | prev [-]

So what? That might happen one out of 100 times. Even if it’s 1 in 10 who cares? Math is verifiable. You’ve just saved yourself weeks or months of work.

icedchai 2 hours ago | parent [-]

You don't think these errors compound? Generated code embodies hundreds of little decisions. Yes, it "usually" works.

dyauspitr 2 hours ago | parent [-]

Not in my experience. With a proper TDD framework it does better than most programmers at a company, who anecdotally have a bug every 2-3 tasks.

heurist 3 hours ago | parent | prev | next [-]

Claude Opus 4.7 responds with "walk" for me both with and without adaptive thinking, but neither the basic model used when you Google search nor GPT 5.4 does.

TeMPOraL 5 hours ago | parent | prev [-]

Idk, but ironically I had to re-read the first part of GP's comment three times, wondering what mistake they were implying, before I noticed it's the car wash, not the car, that's 50 meters away.

I'd say it's a very human mistake to make.

magicalist 4 hours ago | parent | next [-]

> I'd say it's a very human mistake to make.

>> It'll take you under a minute, and driving 50 meters barely gets the engine warm — plus you'd just have to park again at the other end. Honestly, by the time you started the car, you'd already be there on foot.

It talks about starting, driving, and parking the car, clearly reasoning about traveling that distance in the car not to the car. It did not make the same mistake you did.

toraway an hour ago | parent | prev | next [-]

We truly do not need to lower the bar to the floor whenever an LLM makes an embarrassing logical error, particularly when the excuses don't line up at all with the reasoning in its explanation.

thfuran 5 hours ago | parent | prev [-]

I don't want my computer to make human mistakes.

AgentOrange1234 3 hours ago | parent | next [-]

It may be inescapable for problems where we need to interpret human language?

scrollaway 4 hours ago | parent | prev [-]

then don't train it on human data

vintermann 5 hours ago | parent | prev | next [-]

Well, at least we know that's one gotcha/benchmark they aren't gaming.

smooc 5 hours ago | parent | prev | next [-]

I'd say the joke is on you ;-)

fragmede 4 hours ago | parent | prev | next [-]

I tried o3, instant-5.3, Opus 3, and haiku 4.5, and couldn't get them to give bad answers to the couch: stairs vs elevator question. Is there a specific wording you used?

toraway an hour ago | parent [-]

That's an example the LLM came up with itself while analyzing its failed car wash walk/drive answer, it's not OP's question.

slekker 5 hours ago | parent | prev | next [-]

What about Qwen? Does it get that right?

lambda 5 hours ago | parent [-]

I've run several local models that get this right. Qwen 3.5 122B-A10B gets this right, as does Gemma 4 31B. These are local models I'm running on my laptop GPU (Strix Halo, 128 GiB of unified RAM).

And I've been using this commonly as a test when changing various parameters, so I've run it several times, these models get it consistently right. Amazing that Opus 4.7 whiffs it, these models are a couple of orders of magnitude smaller, at least if the rumors of the size of Opus are true.

qingcharles 4 hours ago | parent [-]

Does Gemma 4 31B run full res on Strix or are you running a quantized one? How much context can you get?

lambda 3 hours ago | parent [-]

I'm running an 8 bit quant right now, mostly for speed as memory bandwidth is the limiting factor and 8 bit quants generally lose very little compared to the full res, but also to save RAM.

I'm still working on tweaking the settings; I'm hitting OOM fairly often right now, it turns out that the sliding window attention context is huge and llama.cpp wants to keep lots of context snapshots.

qingcharles 3 hours ago | parent [-]

I had a whole bunch of trouble getting Gemma 4 working properly. Mostly because there aren't many people running it yet, so there aren't many docs on how to set it up correctly.

It is a fantastic model when it works, though! Good luck :)

canarias_mate 4 hours ago | parent | prev [-]

[dead]

throwup238 6 hours ago | parent | prev [-]

The p stands for putrification.

shawnz 5 hours ago | parent | prev | next [-]

Note that for Claude Code, it looks like they added a new undocumented command line argument `--thinking-display summarized` to control this parameter, and that's the only way to get thinking summaries back there.

VS Code users can write a wrapper script which contains `exec "$@" --thinking-display summarized` and set that as their claudeCode.claudeProcessWrapper in VS Code settings in order to get thinking summaries back.
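A sketch of that wrapper, with a stand-in command instead of the real claude binary so the behavior is visible (the flag name and the claudeCode.claudeProcessWrapper setting are as described above; the path is arbitrary):

```shell
# Write the wrapper: pass every argument through and append the flag.
# Point claudeCode.claudeProcessWrapper at this script in VS Code settings.
cat > /tmp/claude-wrapper.sh <<'EOF'
#!/usr/bin/env bash
exec "$@" --thinking-display summarized
EOF
chmod +x /tmp/claude-wrapper.sh

# Sanity check with `echo` standing in for the claude binary:
/tmp/claude-wrapper.sh echo claude-args
# prints: claude-args --thinking-display summarized
```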

accrual 5 hours ago | parent [-]

Here is additional discussion and hacks around trying to retain Thinking output in Claude Code (prior to this release):

https://github.com/anthropics/claude-code/issues/8477

puppystench 6 hours ago | parent | prev | next [-]

Does this mean Claude no longer outputs the full raw reasoning, only summaries? At one point, exposing the LLM's full CoT was considered a core safety tenet.

MarkMarine 5 hours ago | parent | next [-]

Anthropic was chirping about Chinese model companies distilling Claude via the thinking traces, and then the thinking traces started to disappear. Looks like the output product and our understanding have been negatively affected, but I guess that pales in comparison with protecting the model's IP.

axpy906 13 minutes ago | parent | next [-]

That’s kind of funny, since it was a Chinese model that got thinking chains shown in Claude and OA in the first place.

andai 38 minutes ago | parent | prev [-]

When Gemini Pro came out, I found the thinking traces to be extremely valuable. Ironically, I found them much more readable than the final output. They were a structured, logical breakdown of the problem. The final output was a big blob of prose. They removed the traces a few weeks later.

fasterthanlime 6 hours ago | parent | prev | next [-]

I don't think it ever has. For a very long time now, Claude's reasoning has been summarized by Haiku. You can tell because a lot of the time it fails, saying, "I don't see any thought needing to be summarised."

fmbb 6 hours ago | parent | next [-]

Maybe there was no thinking.

astrange 4 hours ago | parent | prev [-]

It also gets confused if the entire prompt is in a text file attachment.

And the summarizer shows the safety classifier's thinking for a second before the model thinking, so every question starts off with "thinking about the ethics of this request".

einrealist 4 hours ago | parent | prev | next [-]

They are trying to optimize the circus trick that 'reasoning' is. The economics still do not favor a viable business at these valuations or at these levels of cost subsidization. The amount of compute required to make 'reasoning' work, or to deliver these incremental improvements, is increasingly obfuscated in the run-up to the IPO.

DrammBA 6 hours ago | parent | prev | next [-]

Anthropic always summarizes the reasoning output to prevent some distillation attacks

jdiff 5 hours ago | parent | next [-]

Genuine question, why have you chosen to phrase this scraping and distillation as an attack? I'm imagining you're doing it because that's how Anthropic prefers to frame it, but isn't scraping and distillation, with some minor shuffling of semantics, exactly what Anthropic and co did to obtain their own position? And would it be valid to interpret that as an attack as well?

irthomasthomas 5 hours ago | parent | next [-]

If you ask Claude in Chinese it thinks it's DeepSeek.

DrammBA 4 hours ago | parent | prev | next [-]

> I'm imagining you're doing it because that's how Anthropic prefers to frame it

Correct.

> would it be valid to interpret that as an attack as well?

Yup.

fragmede 2 hours ago | parent | prev [-]

Firehosing Anthropic to exfiltrate their model seems materially different than Anthropic downloading all of the Internet to create the model in the first place to me. But maybe that's just me?

jdiff 22 minutes ago | parent | next [-]

I don't see the material difference between firehosing Anthropic and Anthropic firehosing random sites on the internet. As someone who runs a few of those random sites, I've had to take actions that increase my costs (and burn my time) to mitigate a whole host of scrapers constantly firing at every available endpoint, even ones specifically marked as off limits.

robrenaud an hour ago | parent | prev [-]

Yeah, it's different. Anthropic profits when it delivers tokens. Hosting providers pay when Anthropic scrapes them.

vintermann 5 hours ago | parent | prev | next [-]

Attacks? That's a choice of words.

DrammBA 5 hours ago | parent [-]

Definitely Anthropic playing the victim after distilling the whole internet.

butlike 4 hours ago | parent | prev | next [-]

Proprietary pattern matcher proves there's no moat; promptly pre-covers other's perception.

nyc_data_geek1 6 hours ago | parent | prev | next [-]

Very cool that these companies can scrape basically all extant human knowledge, utterly disregard IP/copyright/etc, and they cry foul when the tables turn.

butlike 4 hours ago | parent | next [-]

All extant human knowledge SO FAR. Remember, by the nature of the beast, the companies will always be operating in hindsight with outdated human knowledge.

stavros 5 hours ago | parent | prev [-]

Yep, that is exactly what happens. It's a disgrace that their models aren't open, after training on everything humanity has preserved.

They should at least release the weights of their old/deprecated models, but no, that would be losing money.

copperx 2 hours ago | parent [-]

We should treat LLMs somewhat like patents or drugs: after 5 years or so, the models should become open source, or at the very least the weights, to compensate for the distilling of human knowledge.

MasterScrat 6 hours ago | parent | prev [-]

and so does OpenAI

blazespin 5 hours ago | parent | prev | next [-]

Safety versus Distillation, guess we see what's more important.

andrepd 6 hours ago | parent | prev [-]

CoT is basically bullshit, entirely confabulated and not related to any "thought process"...

p_stuart82 7 hours ago | parent | prev | next [-]

yeah they took "i pick the budget" and turned it into "trust us".

bandrami 6 hours ago | parent [-]

I keep saying: even if there's no current malfeasance, the incentives being set up, where the model ultimately determines the token use, which in turn determines the model provider's revenue, will absolutely overcome any safeguards or good intentions given long enough.

vessenes 4 hours ago | parent [-]

This might be true, but right now everybody is like "please let me spend more by making you think longer." The datacenter incentives from Anthropic this month are "please don't melt our GPUs anymore" though.

5 hours ago | parent | prev | next [-]
[deleted]
dgb23 7 hours ago | parent | prev | next [-]

Don't look at "thinking" tokens. LLMs sometimes produce thinking tokens that are only vaguely related to the task if at all, then do the correct thing anyways.

gck1 5 hours ago | parent | next [-]

Why does this comment appear every time someone complains about CoT becoming more and more inaccessible with Claude?

I have entire processes built on top of summaries of CoT. They provide tremendous value, and no, I don't care if "the model still did the correct thing". Thinking blocks show me if the model is confused, and they show me what alternative paths existed.

Besides, "correct thing" has a lot of meanings and decision by the model may be correct relative to the context it's in but completely wrong relative to what I intended.

The proof that thinking tokens are indeed useful is that anthropic tries to hide them. If they were useless, why would they even try all of this?

Starting to feel PsyOp'd here.

dgb23 4 hours ago | parent | next [-]

Didn't you notice that the stream is often incoherent or noisy? Sometimes it goes from thought A to thought B, then action C, but A was entirely unnecessary noise that had nothing to do with B and C. I also sometimes saw signals in the thinking output that were red flags, or, as you said, it got confused, but then it didn't matter at all. Now I just never look at the thinking tokens anymore, because I got bamboozled too often.

Perhaps when you summarize it, then you might miss some of these or you're doing things differently otherwise.

gck1 4 hours ago | parent [-]

The usefulness of thinking tokens in my case might come down to the conditions I have claude working in.

I primarily use claude for Rust, with what I call a masochistic lint config. Compiler and lint errors almost always trigger extended thinking when adaptive thinking is on, and that's where these tokens become a goldmine. They reveal whether the model actually considered the right way to fix the issue. Sometimes it recognizes that ownership needs to be refactored. Sometimes it identifies that the real problem lives in a crate that for some reason is "out of scope" even though it's right there in the workspace, and then concludes with something like "the pragmatic fix is to just duplicate it here for now."

So yes, the resulting code works, and by some definition the model did the correct thing. But to me, "correct" doesn't just mean working, it means maintainable. And on that question, the thinking tokens are almost never wrong or useless. Claude gets things done, but it's extremely "lazy".

gck1 an hour ago | parent [-]

Also, for anyone using Opus with Claude Code: they have again "broken" the thinking summaries, even if you had "showThinkingSummaries": true in your settings.json [1]

You have to pass the `--thinking-display summarized` flag explicitly.

[1] https://github.com/anthropics/claude-code/issues/49268

quadruple 4 hours ago | parent | prev [-]

I agree. Ever since the release of R1, it's like every single American AI company has realized that they actually do not want to show CoT, and then separately that they cannot actually run CoT models profitably. Ever since then, we've seen everyone implement a very bad dynamic-reasoning system that makes you feel like an ass for even daring to ask the model for more than 12 tokens of thought.

shawnz 5 hours ago | parent | prev | next [-]

Thinking summaries might not be useful for revealing the model's actual intentions, but I find that they can be helpful in signalling to me when I have left certain things underspecified in the prompt, so that I can stop and clarify.

thepasch 7 hours ago | parent | prev | next [-]

They also sometimes flag stuff in their reasoning and then think themselves out of mentioning it in the response, when it would actually have been a very welcome flag.

vorticalbox 6 hours ago | parent [-]

Yeah, I've seen this too; I've stopped it mid-response and asked it about it.

Sometimes they notice bugs or issues and then just completely ignore them.

Gracana 6 hours ago | parent [-]

This can result in some funny interactions. I don't know if Claude will say anything, but I've had some models act "surprised" when I commented on something in their thinking, or even deny saying anything about it until I insisted that I can see their reasoning output.

ceejayoz 6 hours ago | parent [-]

Supposedly (https://www.reddit.com/r/ClaudeAI/comments/1seune4/claude_ch...) they can't even see their own reasoning afterwards.

astrange 4 hours ago | parent [-]

It depends on the version. For the more recent Claudes they've been keeping it.

dataviz1000 5 hours ago | parent | prev [-]

Thinking helps the models arrive at the correct answer more consistently. However, they get the reward at the end of a cycle. It turns out that, without heavy constraints during training, the thinking (the series of thinking tokens) is gibberish to humans.

I wonder if they decided that the gibberish actually works better, and that the thinking is interesting for humans to watch but overall not very useful.

dgb23 4 hours ago | parent [-]

OK, so you're saying the gibberish is a feature and not a bug, so to speak? So the thinking output can be understood as coughing and mumbling noises that help the model get onto the right paths?

dataviz1000 4 hours ago | parent | next [-]

Here is a 3blue1brown short about the relationship between words in a three-dimensional vector space. [0] Showing this conceptually to a human requires reducing the dimensions from 10,000 or 20,000 down to 3.

To make the thinking human-understandable, researchers reward not just the correct answer at the end during training; they also seed the beginning with structured thinking-token chains and reward the format of the thinking output.

The thinking tokens do just a handful of things: verification, backtracking, scratchpad or state management (like doing multiplication on paper instead of in your head), decomposition (breaking the problem into smaller parts, which is most of what I see thinking output do), and self-criticism.

An example would be a math problem that was solved by an Italian and another by a German, which might cause those geographic areas to be associated with the solution across the 20,000 dimensions. So if mentioning them yields more accurate answers during training, they will show up in the gibberish, unless the model has been trained to produce much more sensical, human-readable output instead (like the reduction to 3 dimensions).

It has been observed that, sometimes, a model will write perfectly normal-looking English sentences that secretly contain hidden codes for itself in the way the words are spaced or chosen.

[0] https://www.youtube.com/shorts/FJtFZwbvkI4
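
The vector-space relationship described above can be sketched with a toy example. These embeddings are entirely made up and only 3-dimensional (real models use thousands of dimensions); the point is just that direction in the space encodes relatedness:

```python
import math

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, near 0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical 3-d embeddings: "king" and "queen" point in similar
# directions, "banana" does not.
emb = {
    "king":   [0.9, 0.8, 0.1],
    "queen":  [0.8, 0.9, 0.1],
    "banana": [0.1, 0.0, 0.9],
}

print(cosine(emb["king"], emb["queen"]))   # close to 1.0
print(cosine(emb["king"], emb["banana"]))  # much smaller
```

In a real model the "Italian mathematician" association from the comment above would be one of thousands of such directions entangled in the embedding space.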

2 hours ago | parent [-]
[deleted]
alienbaby an hour ago | parent | prev [-]

no, he's saying that amongst whatever else is there, you can often see how you could refine your prompt to guide it better in the first place, helping it avoid bad thinking threads to begin with.

lukan 7 hours ago | parent | prev | next [-]

"Also notable: 4.7 now defaults to NOT including a human-readable reasoning token summary in the output, you have to add "display": "summarized" to get that"

I did not follow all of this, but wasn't there something about those reasoning tokens not representing the internal reasoning, but rather a rough, and sometimes rather misleading, approximation of what the model actually does?

motoboi 7 hours ago | parent | next [-]

The reasoning is the secret sauce. They don't output that. But to give you some feedback about what is going on, they pass the reasoning through another model that generates a human-friendly summary (which actively destroys the signal that could otherwise be copied by the competition).

XenophileJKO 6 hours ago | parent [-]

Don't or can't.

My assumption is the model no longer actually thinks in tokens, but in internal tensors. This is advantageous because it doesn't have to collapse the decision and can simultaneously propagate many concepts per context position.

ainch 6 hours ago | parent | next [-]

I would expect to see a significant wall-clock improvement if that were the case - Meta's Coconut paper was ~3x faster than token-space chain-of-thought because latents contain a lot more information than individual tokens.

Separately, I think Anthropic are probably the least likely of the big 3 to release a model that uses latent-space reasoning, because it's a clear step down in the ability to audit CoT. There has even been some discussion that they accidentally "exposed" the Mythos CoT to RL [0] - I don't see how you would apply a reward function to latent space reasoning tokens.

[0]: https://www.lesswrong.com/posts/K8FxfK9GmJfiAhgcT/anthropic-...

haellsigh 6 hours ago | parent | prev | next [-]

If that's true, then we're following the timeline of https://ai-2027.com/

magicalist 3 hours ago | parent | next [-]

> If that's true, then we're following the timeline

Literally just a citation of Meta's Coconut paper[1].

Notice the 2027 folk's contribution to the prediction is that this will have been implemented by "thousands of Agent-2 automated researchers...making major algorithmic advances".

So, considering that the discussion of latent space reasoning dates back to 2022[2] through CoT unfaithfulness, looped transformers, using diffusion for refining latent space thoughts, etc, etc, all published before ai 2027, it seems like to be "following the timeline of ai-2027" we'd actually need to verify that not only was this happening, but that it was implemented by major algorithmic advances made by thousands of automated researchers, otherwise they don't seem to have made a contribution here.

[1] https://ai-2027.com/#:~:text=Figure%20from%20Hao%20et%20al.%...

[2] https://arxiv.org/html/2412.06769v3#S2

butlike 3 hours ago | parent | prev | next [-]

Hilariously, I clicked back a bunch and got a client side error. We have a long way to go. I wouldn't worry about it.

matltc 6 hours ago | parent | prev | next [-]

Care to expound on that? Maybe a reference to the relevant section?

ACCount37 6 hours ago | parent | next [-]

Ctrl-F "neuralese" on that page.

9991 6 hours ago | parent | prev [-]

You should just read the thing, whether or not you believe it, to have an informed opinion on the ongoing debate.

9991 6 hours ago | parent | prev | next [-]

That's not supposed to happen til 2027. Ruh roh.

literalAardvark 5 hours ago | parent | prev [-]

Only if you ignore context and just ctrl-f in the timeline.

What are you, Haiku?

But yeah, in many ways we're at least a year ahead on that timeline.

JoshuaDavid 4 hours ago | parent | prev | next [-]

Don't.

The first 500 or so tokens are raw thinking output; then the summarizer kicks in for longer thinking traces. Sometimes longer thinking traces leak through, or the summarizer model (i.e. Claude Haiku) refuses to summarize them and includes a direct quote of the passage it won't summarize. The summarizer prompt can be viewed at https://xcancel.com/lilyofashwood/status/2027812323910353105... among other places.

WhitneyLand 6 hours ago | parent | prev | next [-]

No, there is research in that direction and it shows some promise but that’s not what’s happening here.

XenophileJKO 5 hours ago | parent [-]

Are you sure? It would be great to get official/semi-official validation that thinking is or is not resolved to a token embedding value in the context.

astrange 4 hours ago | parent [-]

You can read the model cards. Claude thinks in regular text, but the summarizer is to hide its tool use and other things (web searches, coding).

6 hours ago | parent | prev | next [-]
[deleted]
alex7o 6 hours ago | parent | prev | next [-]

Most likely. Would be cool to see an open-source model use diffusion for thinking.

motoboi 6 hours ago | parent | prev [-]

Don't. Thinking right now is just text: chain of thought, but just regular tokens and text being output by the model.

boomskats 7 hours ago | parent | prev | next [-]

'Hey Claude, these tokens are utter unrelated bollocks, but obviously we still want to charge the user for them regardless. Please construct a plausible explanation as to why we should still be able to do that.'

dheera 6 hours ago | parent | prev [-]

Although it's more likely they are protecting secret sauce in this case, I wonder if there's an alternate explanation: that LLMs reason better when NOT trying to reason in natural-language output tokens, but instead implement reasoning further upstream in the transformer.

simonw 5 hours ago | parent | prev | next [-]

... here's the pelican, I think Qwen3.6-35B-A3B running locally did a better job! https://simonwillison.net/2026/Apr/16/qwen-beats-opus/

cakeface 4 hours ago | parent | next [-]

You used a secret backup test! Truly honored to see the flamingos. We obviously need them all now ;-)

bredren 5 hours ago | parent | prev | next [-]

A secret backup test to the pelican? This is as noteworthy as 4.7 dropping.

qingcharles 4 hours ago | parent [-]

That flamingo is hilarious. Is that his beak or a huge joint he's smoking?

SyneRyder 3 hours ago | parent [-]

With the sunglasses, the long flamingo neck and the "joint", I immediately thought of the poster for Fear And Loathing In Las Vegas:

https://www.imdb.com/title/tt0120669/mediaviewer/rm264790937...

EDIT: Actually, it must be a beak. If you zoom in, only one eye is visible and it's facing to the left. The sunglasses are actually on sideways!

ionwake 2 hours ago | parent | prev [-]

based sun worshipping pelican

markrogersjr 5 hours ago | parent | prev | next [-]

CLAUDE_CODE_DISABLE_ADAPTIVE_THINKING=1 claude…
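
Expanded into a sketch of the full workaround people describe (disable adaptive thinking, then pin a fixed budget): MAX_THINKING_TOKENS is a documented Claude Code setting, but whether it interacts with the disable flag exactly as expected is an assumption, and behavior may vary by version and model.

```shell
# Disable adaptive thinking and pin a fixed thinking budget instead.
# MAX_THINKING_TOKENS is documented for Claude Code; its interaction
# with the disable flag here is an assumption.
export CLAUDE_CODE_DISABLE_ADAPTIVE_THINKING=1
export MAX_THINKING_TOKENS=32000
# then launch as usual:
#   claude
```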

miguno 2 hours ago | parent | next [-]

As per https://code.claude.com/docs/en/model-config#adaptive-reason...:

> Opus 4.7 always uses adaptive reasoning. The fixed thinking budget mode and CLAUDE_CODE_DISABLE_ADAPTIVE_THINKING do not apply to it.

slekker 5 hours ago | parent | prev [-]

What does that actually do? Force the "effort" to be static to what I set?

4 hours ago | parent | prev | next [-]
[deleted]
7 hours ago | parent | prev | next [-]
[deleted]
maximgran 4 hours ago | parent | prev | next [-]

https://github.com/anthropics/claude-agent-sdk-python/pull/8... - created a PR for this because I hit it in their Python SDK

nextaccountic 3 hours ago | parent | prev | next [-]

If you do include reasoning tokens you pay more, right?

5 hours ago | parent | prev | next [-]
[deleted]
cyanydeez 6 hours ago | parent | prev | next [-]

It's likely hiding the model downgrade path they require to meet sustainable revenue. Should be interesting if they can enshittify slowly enough to avoid the ablative loss of customers! Good luck all VCs!

vessenes 6 hours ago | parent [-]

They have super sustainable revenue. They are badly supply-constrained on compute, and have a really difficult balancing act over the next year or two: they have to trade off spending that limited compute on model training so they can stay ahead, while leaving enough available for customers so they can keep growing their customer base.

dainiusse 5 hours ago | parent | next [-]

But do they? When was the last time they declined your subscription because they have no compute?

mrandish 3 hours ago | parent | next [-]

> When was the last time they declined your subscription because they have no compute?

Is that a serious question? There have been a bunch of obvious signs in recent weeks that they are significantly compute-constrained and that current revenue isn't adequate, ranging from myriad reports of model regression ('Claude is getting dumber/slower') to today's announcement, which first claims 4.7 costs the same as 4.6 but later discloses that "the same input can map to more tokens—roughly 1.0–1.35× depending on the content type. Second, Opus 4.7 thinks more at higher effort levels, particularly on later turns in agentic settings. This improves its reliability on hard problems, but it does mean it produces more output tokens" and "we’ve raised the default effort level to xhigh for all plans", and disclosing that all images are now processed at higher resolution, which uses a lot more tokens.

In addition to the changes in performance, usage, and consumption costs users can see, people say they are 'optimizing' opaque under-the-hood parameters as well. Hell, I'm still just a light user of their free web chat (Sonnet 4.6), and even that started getting noticeably slower/dumber a few weeks ago. Over months of casual use I ran into their free-tier limits exactly twice; in the past week I've hit them every day, despite those being especially light-use days. Two days ago the free web chat was overloaded for a couple of hours ("Claude is unavailable now. Try again later"). Yesterday, I hit the free limit after literally five questions: two revising an 8-line JS script and three on current news.
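
To put rough numbers on the disclosed multiplier, here is a back-of-envelope sketch; the daily volume and per-token price are illustrative assumptions of mine, not Anthropic's actual pricing, and only the 1.0–1.35× input-tokenization figure comes from the announcement:

```python
# Illustrative only: what "same price, but more tokens" means in practice.
# Assumptions (mine): 1M input tokens/day at $15 per million input tokens.
# The 1.35x worst-case multiplier is the disclosed figure.
input_tokens_per_day = 1_000_000
price_per_million = 15.0

baseline_cost = input_tokens_per_day / 1e6 * price_per_million
worst_case_cost = input_tokens_per_day * 1.35 / 1e6 * price_per_million

print(f"baseline:   ${baseline_cost:.2f}/day")
print(f"worst case: ${worst_case_cost:.2f}/day")  # 35% more, before counting
                                                  # the extra thinking output
```

The extra thinking output at higher effort levels comes on top of this, which is why "same per-token price" and "same bill" are not the same claim.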

alwa 5 hours ago | parent | prev | next [-]

Most weekdays.

https://status.claude.com/

vessenes 5 hours ago | parent | prev [-]

Just last week. They cut off openclaw. And they added a price increased fast mode. And they announced today new features that are not included with max subscriptions.

They are roughly 5GW short and scrambling to add it.

dainiusse 4 hours ago | parent [-]

Now: is it a price increase or a resource shortage? These are not the same thing.

vessenes 4 hours ago | parent [-]

If there is any elasticity to demand whatsoever, then these are the same thing.

cyanydeez 5 hours ago | parent | prev [-]

It's cute you think they're going to do any full training of a model. The sooner they can extract cash from the machine, the better.

vessenes 4 hours ago | parent [-]

This is low effort thinking, and a low effort comment. They have a lot of cash. They do not think they have achieved a "city of geniuses" in a datacenter yet. They are racing against two high quality frontier model teams, with meta in the wings. They have billions of dollars in cash that they are currently trying to spend to increase their datacenter capacity.

Any compute time spent on inference is necessarily taken from training compute time, causing them long term strategic worries.

What part of that do you think leads toward cash extraction?

boxingdog an hour ago | parent | prev | next [-]

[dead]

haellsigh 7 hours ago | parent | prev [-]

[dead]