thomassmith65 2 hours ago

And they're still gaslighting:

  We take reports about degradation very seriously. We never intentionally degrade our models [...] On March 4, we changed Claude Code's default reasoning effort from high to medium
Anthropic is the best company of its kind, but that is badly worded PR.
sobjornstad an hour ago

Is adding JPEG compression to your software "intentional degradation" of it? I wouldn't say offering a selectable option to use a faster, cheaper version of something qualifies as "degradation".

It is certainly true that they did a poor job communicating this change to users (I did not know the previous default was "high"; I assumed they had added effort levels both above and below whatever the single effort setting had been before). On the other hand, I was using Claude Code a fair bit on "medium" during that period and it seemed to perform just fine for me (while saving usage and time compared to "high"), so it isn't clear that "medium" was the wrong default, only that it should have been explained better.

xpe an hour ago

To my eye, gaslighting is a serious accusation. Wikipedia's first line matches how I think of it: "Gaslighting is the manipulation of someone into questioning their perception of reality."

Did I miss something? I'm only looking at primary sources to start. Not Reddit. Not The Register. Official company communications.

Did Anthropic tell users something like "you are wrong; your experience is not worse"? If so, that would reach the bar of gaslighting as I understand it (and I'm not alone in that understanding). If you have a different understanding, please share it so I know what you mean.

thomassmith65 39 minutes ago

I'd rather not speak too poorly of Anthropic, because - to the extent I can bring myself to like a tech company - I like Anthropic.

That said, the copy uses "we never intentionally degrade our models" to mean something like "we never degrade one facet of our models unless it improves some other facet of our models". This is a cop out, because it is what users suspected and complained about. What users want - regardless of whether it is realistic to expect - is for Anthropic to buy even more compute than Anthropic already does, so that the models remain equally smart even if the service demand increases.