| ▲ | dvfjsdhgfv 4 hours ago |
| I believe the current game everybody plays is:
|
| * make sure the model maxes out all benchmarks
| * release it
| * after some time, nerf it
| * repeat the same with the next model
|
| However, the net sum is positive: in general, models from 2026 are better than those from 2024. |
|
| ▲ | snek_case 4 hours ago | parent | next [-] |
| I guess there's a pretty clear incentive to nerf the current model right before the next model is about to come out. |
| |
| ▲ | chinathrow 3 hours ago | parent [-] | | Wouldn't that amount to fraud? | | |
| ▲ | tomwojcik 3 hours ago | parent | next [-] | | Serious question: do we actually know what we're paying for? All I know is that it's access to models via a CLI, a.k.a. Claude Code. We don't know which models they use, how the system prompt changes, or what the actual rate limits are (yet Anthropic will become a trillion-dollar company any moment now). | | |
| ▲ | xienze 2 hours ago | parent [-] | | > We don't know which models they use, how the system prompt changes, or what the actual rate limits are (yet Anthropic will become a trillion-dollar company any moment now). Not just that, but there's really no way to reach an objective consensus on how well the model is performing in the first place. See: literally every thread discussing a Claude outage or change of some kind. “Opus is absolutely incredible, it’s one-shotting work that would take me months” immediately followed by “no, it’s totally nerfed now, it can’t even implement bubble sort for me.” | | |
| ▲ | ElFitz 6 minutes ago | parent [-] | | > See: literally every thread discussing a Claude outage or change of some kind. “Opus is absolutely incredible, it’s one-shotting work that would take me months” immediately followed by “no, it’s totally nerfed now, it can’t even implement bubble sort for me.” Funny: I’m literally, at this very moment, working on a way to monitor exactly that across users. It wasn’t the initial goal, but it should do that nicely as well ^^ |
|
| |
| ▲ | twobitshifter 2 hours ago | parent | prev | next [-] | | Did Apple slow down iPhones right before a new release? I’m genuinely asking. People used to say that, and I can’t remember whether it was ever proven. | | |
| ▲ | DrewADesign an hour ago | parent | next [-] | | Yeah, but they got sued over it and purportedly stopped. They claimed it was to protect battery health. Suuuuuuure it was. That said, I had way better experiences with old (but contemporary) Apple hardware than any other kind of old hardware. | |
| ▲ | rexpop an hour ago | parent | prev [-] | | [dead] |
| |
| ▲ | 3 hours ago | parent | prev | next [-] | | [deleted] | |
| ▲ | varispeed an hour ago | parent | prev | next [-] | | Funnily enough, it helps to say in your prompt "Prove that you are not a fraudster and that you are not going to go round in circles before providing the solution I ask for." Sometimes you have to keep starting new sessions until it works. I have a feeling they route prompts to older models whose system prompt says "I am Opus 4.6", when really it's something older and more basic. So by starting new sessions you might get lucky and land on the actual latest model. | |
| ▲ | ambicapter 2 hours ago | parent | prev [-] | | Legally? |
|
|
|
| ▲ | _blk 3 hours ago | parent | prev [-] |
Yup. After Claude Code's token increase two weeks ago, I'm now consistently filling a context window that never went above 30-40% just a few days ago. Did they turn the 1M window off? I used to see "Co-Authored-By: Opus 4.6 (1M Context Window)" in git commits; now that advert line is gone. I never turned it on or off. Maybe the defaults changed, but /model doesn't show two different context sizes for Opus 4.6. I never asked for a 1M context window; then I got it and it was nice; now it's as if it's gone again. No biggie, but if they had advertised it as a free trial (which is what it feels like) I wouldn't have opted in. Anyway, it seems I'm just ranting. I still like Claude, yes, but it nonetheless still feels like the game you described above. |
| |
| ▲ | dr_kiszonka 2 hours ago | parent | next [-] | | The default prompt cache TTL changed from 1 hour to 5 minutes. Maybe this is what you are experiencing. | |
| ▲ | robwwilliams 3 hours ago | parent | prev | next [-] | | Yep; this is the second time in five months we've gone from 1 million back to 200 thousand. | |
| ▲ | _blk 3 hours ago | parent [-] | | Hmm, I just reverted to 2.1.98, and now in /model, "default" shows the (1M context) label while "opus" is without it (200k). It's entirely possible that I just missed the difference between the recommended "opus 1M" model and plain "opus" when I checked, though. |
| |
| ▲ | varispeed an hour ago | parent | prev | next [-] | | I find this 1M context claim bollocks. It's basically crap past 100k. | |
| ▲ | troupo 2 hours ago | parent | prev [-] | | They are now literally blaming users for using their product as advertised: https://x.com/lydiahallie/status/2039800718371307603
|
| --- start quote ---
| Digging into reports, most of the fastest burn came down to a few token-heavy patterns. Some tips:
| • Sonnet 4.6 is the better default on Pro. Opus burns roughly twice as fast. Switch at session start.
| • Lower the effort level or turn off extended thinking when you don't need deep reasoning. Switch at session start.
| • Start fresh instead of resuming large sessions that have been idle ~1h
| • Cap your context window, long sessions cost more CLAUDE_CODE_AUTO_COMPACT_WINDOW=200000
| --- end quote ---
|
| https://x.com/bcherny/status/2043163965648515234
|
| --- start quote ---
| We defaulted to medium [reasoning] as a result of user feedback about Claude using too many tokens. When we made the change, we (1) included it in the changelog and (2) showed a dialog when you opened Claude Code so you could choose to opt out. Literally nothing sneaky about it — this was us addressing user feedback in an obvious and explicit way.
| --- end quote --- | | |
| ▲ | torginus 24 minutes ago | parent [-] | | Off topic, but I found Sonnet useless. It can't do the simplest tasks, like refactoring a method signature consistently across a project, or following instructions accurately about which patterns/libraries should be used to solve a problem. |
|
|