| ▲ | postalcoder 7 hours ago |
| I used Claude Code Max as my daily driver last year, and this sort of drama was par for the course. It's why I migrated entirely to Codex, despite liking Claude's harness more. There's this honeymoon period with Claude that you experience for a month or two, followed by a trough of disillusionment, and then a rebound after a model update (rinse and repeat). It doesn't help that Anthropic is experiencing a vicious compute famine atm. |
|
| ▲ | sleepytimetea 7 hours ago | parent | next [-] |
| I like the term "compute famine" - it appears that all AI infrastructure is maxed out globally. |
|
| ▲ | cmaster11 7 hours ago | parent | prev | next [-] |
| I've been using Claude Code for half a year, and these past couple of weeks have been a totally different experience. I'm on Max 20, and seeing my weekly quota go bust in ~3 days is a bit absurd when nothing has significantly changed in the way I work. |
| ▲ | SkyPuncher 6 hours ago | parent | next [-] | This is my exact experience as well. It's further frustrating that I committed to certain project deadlines knowing I'd be able to complete them in X amount of time with agent tooling. That agentic tooling is no longer viable, and I'm scrambling to readjust expectations and how much I can commit to. |
| ▲ | hirako2000 7 hours ago | parent | prev | next [-] | I refuse to use Anthropic's models (and OpenAI's, and Gemini) because the math simply doesn't add up. Add to that the fact that we are being taken for fools with dramatic announcements and FOMO messaging. I even suspect some reaction farms are being run to boost posts from people boasting about Claude models. These don't happen for Codex. Nor for Mistral. Nor for DeepSeek. It can't just be that Claude Code is so much better. There are open-weight models that work perfectly fine for most cases, at a fraction of the cost. Why are more people not talking about those? Manipulation. |
| ▲ | throwaway2027 6 hours ago | parent [-] | Mistral isn't that great. DeepSeek was good when it first got thinking. But most people just try something out, and if that doesn't work on a model, then that model is "bad." Claude, Codex, and Gemini just are that much better now, but if they quantize or cut limits they destabilize, and you're right that you might as well use something worse but reliable. |
| ▲ | hirako2000 6 hours ago | parent [-] | I regularly compare models. You're right, DeepSeek was more impressive when the latest version came out, but since then they've accepted slower throughput to keep the same quality. I often compare with Gemini. Sure, those Google servers are super fast, but I can't see it being better. Qwen and DeepSeek simply work better for me. I haven't tested Mistral in a while, so you may be right. People try things out and feel comfortable using U.S. models (I can see the logic), but mostly for brand recognition. Anthropic and OpenAI are the best, aren't they? When the models jam, they blame themselves. |
|
| ▲ | wellthisisgreat 5 hours ago | parent | prev [-] | Same exact experience. I never expected to deplete a weekly quota when not working every day of the week. |
|
|
| ▲ | HauntingPin 5 hours ago | parent | prev [-] |
| This past week was a nightmare trying to get Claude to do any useful work. I've cancelled my subscription, and everybody else here having problems should too. I don't think Anthropic cares about anything else. |