simonw 6 hours ago
I buy the theory that Claude Code is engineered to use things like token caching efficiently, and that the Claude Max plans were designed with those optimizations in mind. If people start using Claude Max plans with other agent harnesses that don't use the same kinds of optimizations, the economics may no longer work out. (But I also buy that they're going for horizontal control of the stack here, and that banning other agent harnesses was a competitive move to support that.)
mirekrusin 6 hours ago
It should just burn quota faster then. Instead of blocking, they should just mention that if you use other tools, your quota may drain at 3x the speed compared to CC. People would switch.
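The policy being suggested could be as simple as a per-client burn multiplier on the same shared quota. A minimal sketch, where the 3x figure comes from the comment above and the client names and multiplier table are purely illustrative, not anything Anthropic actually does:

```python
# Hypothetical sketch: instead of blocking third-party clients, meter them
# against the same session quota at a higher burn rate. The multipliers and
# client names here are illustrative assumptions, not real Anthropic policy.

BURN_MULTIPLIER = {
    "claude-code": 1.0,   # first-party client, assumed cache-efficient
    "third-party": 3.0,   # uncached/inefficient harnesses burn quota faster
}

def charge(quota_remaining: float, tokens_used: int, client: str) -> float:
    """Deduct token usage from the session quota, scaled per client."""
    cost = tokens_used * BURN_MULTIPLIER.get(client, 3.0)
    return max(0.0, quota_remaining - cost)

# 10k tokens from a third-party harness costs 30k quota units;
# the same usage from Claude Code costs only 10k.
left = charge(100_000, 10_000, "third-party")
```

Under this scheme users see the tradeoff directly on their usage page instead of hitting a ban.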
andai 5 hours ago
When I last checked a few months ago, Anthropic was the only provider without automatic prompt caching. You had to enable it manually (and you could only set a handful of cache breakpoints per request?), and most third-party tools don't. They seem to have started rejecting third-party use of the subscription a few weeks ago, before Claw blew up. By the way, does anyone know about the Agents SDK? Apparently you can use it with an auth token. Is anyone doing that, or is it likely to get your account in trouble as well?
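For context, the manual caching mentioned above works by tagging content blocks with `cache_control` markers; the API caps the number of such breakpoints per request (four, at the time of writing). A minimal sketch that just builds the request payload, with no network call; the model id is a placeholder and the field layout follows the public Messages API docs, so treat it as an assumption rather than an official example:

```python
# Sketch of Anthropic-style manual prompt caching: the caller marks cache
# breakpoints explicitly with "cache_control" on content blocks. Everything
# up to a breakpoint is eligible for prefix caching on subsequent requests.
# This only constructs the request body; no API call is made.

def build_cached_request(system_prompt: str, history: list, user_msg: str) -> dict:
    """Build a Messages API payload that caches the system-prompt prefix."""
    return {
        "model": "claude-example-model",  # placeholder model id
        "max_tokens": 1024,
        "system": [
            {
                "type": "text",
                "text": system_prompt,
                # Breakpoint: this block and everything before it is cached.
                "cache_control": {"type": "ephemeral"},
            }
        ],
        "messages": history + [{"role": "user", "content": user_msg}],
    }

payload = build_cached_request("You are a coding agent.", [], "List the files.")
```

A harness that omits these markers re-sends the full prompt at uncached rates on every turn, which is presumably where the cost gap comes from.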
pluralmonad 4 hours ago
I would be surprised if the primary reason for banning third-party clients isn't that they collect training data via CC's telemetry and analytics. I know CC needlessly connects to Google infrastructure, I assume for analytics.
volkercraig 6 hours ago
Absolutely. I installed clawdbot just long enough to send a single message, and it burned through almost a quarter of my session allowance. That was enough for me. Meanwhile, I can use CC comfortably for a few hours and have only hit my token limit a few times. I've had a similar experience with opencode, but I find that works better with my local models anyway.
ImprobableTruth 6 hours ago
If that were the real reason, why wouldn't they just make it so that clients that don't use caching correctly burn through the limit faster?