puppystench 8 hours ago

Does this mean Claude no longer outputs the full raw reasoning, only summaries? At one point, exposing the LLM's full CoT was considered a core safety tenet.

MarkMarine 6 hours ago | parent | next [-]

Anthropic was chirping about Chinese model companies distilling Claude via the thinking traces, and then the thinking traces started to disappear. Looks like the output product and our understanding have been negatively affected, but that pales in comparison with protecting the model's IP, I guess.

andai 2 hours ago | parent | next [-]

When Gemini Pro came out, I found the thinking traces to be extremely valuable. Ironically, I found them much more readable than the final output. They were a structured, logical breakdown of the problem. The final output was a big blob of prose. They removed the traces a few weeks later.

axpy906 2 hours ago | parent | prev [-]

That’s kind of funny, since it was a Chinese model that made thinking chains visible in Claude and OA in the first place.

fasterthanlime 8 hours ago | parent | prev | next [-]

I don't think it ever has. For a very long time now, Claude's reasoning has been summarized by Haiku. You can tell because a lot of the time it fails, saying, "I don't see any thought needing to be summarised."

fmbb 8 hours ago | parent | next [-]

Maybe there was no thinking.

astrange 6 hours ago | parent | prev [-]

It also gets confused if the entire prompt is in a text file attachment.

And the summarizer shows the safety classifier's thinking for a second before the model thinking, so every question starts off with "thinking about the ethics of this request".

einrealist 6 hours ago | parent | prev | next [-]

They are trying to optimize the circus trick that 'reasoning' is. The economics still do not favor a viable business at these valuations or at these levels of cost subsidization. The amount of compute required to make 'reasoning' work, or to deliver these incremental improvements, is increasingly obfuscated in the run-up to the IPO.

DrammBA 8 hours ago | parent | prev | next [-]

Anthropic always summarizes the reasoning output to prevent some distillation attacks.

jdiff 6 hours ago | parent | next [-]

Genuine question, why have you chosen to phrase this scraping and distillation as an attack? I'm imagining you're doing it because that's how Anthropic prefers to frame it, but isn't scraping and distillation, with some minor shuffling of semantics, exactly what Anthropic and co did to obtain their own position? And would it be valid to interpret that as an attack as well?

irthomasthomas 6 hours ago | parent | next [-]

If you ask Claude in Chinese, it thinks it's DeepSeek.

DrammBA 5 hours ago | parent | prev | next [-]

> I'm imagining you're doing it because that's how Anthropic prefers to frame it

Correct.

> would it be valid to interpret that as an attack as well?

Yup.

fragmede 4 hours ago | parent | prev [-]

Firehosing Anthropic to exfiltrate their model seems materially different than Anthropic downloading all of the Internet to create the model in the first place to me. But maybe that's just me?

jdiff 2 hours ago | parent | next [-]

I don't see the material difference between firehosing Anthropic and Anthropic firehosing random sites on the internet. As someone who runs a few of those random sites, I've had to take actions that increase my costs (and burn my time) to mitigate a whole host of new scrapers constantly firing at every available endpoint, even ones specifically marked as off limits.

robrenaud 3 hours ago | parent | prev [-]

Yeah, it's different. Anthropic profits when it delivers tokens. Hosting providers pay when Anthropic scrapes them.

vintermann 7 hours ago | parent | prev | next [-]

Attacks? That's a choice of words.

DrammBA 7 hours ago | parent [-]

Definitely Anthropic playing the victim after distilling the whole internet.

butlike 5 hours ago | parent | prev | next [-]

Proprietary pattern matcher proves there's no moat; promptly pre-covers others' perception.

nyc_data_geek1 7 hours ago | parent | prev | next [-]

Very cool that these companies can scrape basically all extant human knowledge, utterly disregard IP/copyright/etc., and then cry foul when the tables turn.

butlike 5 hours ago | parent | next [-]

All extant human knowledge SO FAR. Remember, by the nature of the beast, the companies will always be operating in hindsight with outdated human knowledge.

stavros 7 hours ago | parent | prev [-]

Yep, that is exactly what happens. It's a disgrace that their models aren't open, after training on everything humanity has preserved.

They should at least release the weights of their old/deprecated models, but no, that would be losing money.

copperx 3 hours ago | parent [-]

We should treat LLMs somewhat like patents or drugs: after five years or so, the models should become open source, or at the very least the weights, to compensate for the distilling of human knowledge.

MasterScrat 7 hours ago | parent | prev [-]

and so does OpenAI

blazespin 7 hours ago | parent | prev | next [-]

Safety versus Distillation, guess we see what's more important.

andrepd 7 hours ago | parent | prev [-]

CoT is basically bullshit, entirely confabulated and not related to any "thought process"...

clbrmbr a minute ago | parent [-]

But CoT distillation still WORKS. See the DeepSeek R1 paper.
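For readers unfamiliar with the term: CoT distillation in this style is essentially supervised fine-tuning of a student model on (prompt, reasoning trace, answer) triples collected from a stronger teacher. A minimal sketch of the data-formatting step, with the tag names and record layout being illustrative assumptions rather than any lab's actual format:

```python
# Sketch: packing teacher (prompt, chain-of-thought, answer) triples into
# supervised fine-tuning records for CoT distillation. The student is trained
# to reproduce the full reasoning trace followed by the final answer, not
# just the answer itself. Field names and the <think> tags are assumptions.

def format_distillation_example(prompt: str, cot: str, answer: str) -> dict:
    """Pack one teacher sample into a single SFT training record."""
    target = f"<think>\n{cot}\n</think>\n{answer}"
    return {"input": prompt, "target": target}

# Toy teacher output standing in for traces scraped from a stronger model.
teacher_samples = [
    ("What is 17 * 3?", "Multiply 17 by 3: 17 * 3 = 51.", "51"),
]

sft_dataset = [format_distillation_example(p, c, a) for p, c, a in teacher_samples]
```

The point the comment is making is that these records carry enough signal that ordinary fine-tuning on them transfers much of the teacher's reasoning behavior, which is why summarizing or hiding the traces blocks this pipeline at the data-collection step.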