| ▲ | Anthropic says OpenClaw-style Claude CLI usage is allowed again (docs.openclaw.ai) |
| 503 points by jmsflknr 2 days ago | 314 comments |
| |
|
| ▲ | steipete 2 days ago | parent | next [-] |
| Peter here from OpenClaw. For context, here’s why our post reads the way it does: Boris from Claude Code said publicly on Twitter that CLI-style usage is allowed. We took that seriously and invested time building around that guidance. I even changed the defaults, so when using the CLI we automatically disable features that use excessive tokens, like the heartbeat feature. But in practice, Anthropic still blocks parts of our system prompt, so the actual behavior today does not match what was communicated publicly. https://x.com/bcherny/status/2041035127430754686 They seem to have since changed their classifier as people hack around it, since it is trivial to do so with a few renames. I'm not playing that game, so it's in a weird limbo where it should work in theory but doesn't in practice. |
| |
| ▲ | TIPSIO 2 days ago | parent | next [-] | | A lot of people have spent a considerable amount of time building out "claude -p" workflows trusting Anthropic because of those same Tweet assurances outside of OpenClaw. It seems with the new "--bare" flag they are introducing, a huge rug pull is coming as they plan to deprecate -p for unlimited users. The docs now read: > "Bare mode skips OAuth and keychain reads. Anthropic authentication must come from ANTHROPIC_API_KEY or an apiKeyHelper in the JSON passed to --settings. Bedrock, Vertex, and Foundry use their usual provider credentials. --bare is the recommended mode for scripted and SDK calls, and will become the default for -p in a future release." Hope I am reading this wrong or this is clarified. https://code.claude.com/docs/en/headless | | |
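For reference, based only on that docs excerpt, a `--bare` invocation would presumably look something like the following. The flag names come from the quoted docs; the key and prompt are placeholders, and the exact shape of the `--settings` JSON is not specified there, so treat this purely as a sketch:

```shell
# Per the quoted docs: --bare skips OAuth and keychain reads entirely,
# so authentication must come from ANTHROPIC_API_KEY (or an apiKeyHelper
# in the JSON passed to --settings).
export ANTHROPIC_API_KEY="sk-ant-..."   # placeholder key
claude -p --bare "summarize the changes in HEAD~1"
```

If `--bare` really does become the default for `-p`, existing scripts relying on OAuth-backed subscription auth would break in exactly the way TIPSIO describes.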
| ▲ | camkego a day ago | parent [-] | | It seems clear that Anthropic wants users to pay API rates, not subscriber rates, for tokens used programmatically. As a user, I want to pay the subscription rates with -p, but it seems they want to block that. |
| |
| ▲ | ghm2180 a day ago | parent | prev | next [-] | | I've commented elsewhere about just having simple rate limits tied to OAuth tokens. This should not be that hard. There is one simple policy: subscriptions are for use at a human scale of comprehension; API keys are for everything else. Anthropic can have a machine/bot get rate limited, and people can build workflows using `claude -p` or something even better (like an SDK), all while using their OAuth tokens for Max/Pro. | |
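A minimal sketch of that per-token rate-limit idea, as a token bucket keyed by OAuth token (the class name, rates, and keys here are illustrative assumptions, not anything Anthropic has described):

```python
import time
from collections import defaultdict

class TokenBucket:
    """Per-key token bucket: refills `rate` tokens/sec, holds up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        # Each key starts with a full bucket, timestamped at first use.
        self.state = defaultdict(lambda: (capacity, time.monotonic()))

    def allow(self, key: str, cost: float = 1.0) -> bool:
        """Return True and deduct `cost` if `key` has budget; otherwise False."""
        tokens, last = self.state[key]
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        tokens = min(self.capacity, tokens + (now - last) * self.rate)
        if tokens >= cost:
            self.state[key] = (tokens - cost, now)
            return True
        self.state[key] = (tokens, now)
        return False
```

A bucket like this naturally permits human-scale bursts while throttling sustained machine-scale traffic, which is roughly the policy split being proposed.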
| ▲ | MillionOClock 2 days ago | parent | prev | next [-] | | Peter, while we are on the subject of clarifying what is and isn't allowed, I have a question: has OpenAI clearly communicated precisely where one is allowed to use their Codex quota? For instance, as far as I understand, it is allowed to use it with OpenClaw, but does that extend to any other coding harness? Say I have an app (potentially a paid one) and want my users to use their Codex quota in it; is that permitted? As you can probably imagine, that would unlock a lot of use cases, given smaller actors can't subsidize token costs as much, but unfortunately, and maybe expectedly due to the nature of subscriptions, I have not been able to find any answer regarding this. | | |
| ▲ | extr 2 days ago | parent [-] | | I'm not sure they have "officially" said anything but they do allow Codex OAuth login for 3rd party coding agents: pi, opencode, etc. Employees on twitter have explicitly approved this. | | |
| ▲ | MillionOClock 2 days ago | parent [-] | | That matches what I have seen, but I think I remember reading a tweet that mentioned those "developing in the open" (not an exact citation, just based on what I remember). That made me wonder whether they considered this allowed only for open source software, or whether they intended to be much more permissive, essentially letting users use their quotas wherever they want, or maybe completely different rules entirely. Again, I feel there could be more transparency regarding all of that. |
|
| |
| ▲ | tngranados 2 days ago | parent | prev | next [-] | | Looks like they are trying to correct course now, but they’ve already lost the trust, and with the new lower limits, it’s probably not worth using it in OpenClaw | |
| ▲ | dmohl0 a day ago | parent | prev | next [-] | | Claude CLI has a server mode - am I missing something here, or could we all just claude --server and let openclaw use claude via a2a? | |
| ▲ | conroydave a day ago | parent | prev | next [-] | | thank you for your commitment to open source. | |
| ▲ | extr 2 days ago | parent | prev [-] | | I mean, surely you can understand the difficulty of their position, right? It's as if Waymo offered a subsidized, subscription-based plan that models a certain type of ridership as typical, but then people start scheduling rides on a timer with no one in them, far outside the original use case of "Get me from point A to point B". And of course the line between what is acceptable is quite fuzzy. You could imagine it being seen as okay to send a rider-less Waymo to pick up groceries occasionally - but not to schedule one every single day at 4:30PM to pick up a single ice cream cone. You can argue that this is unfair and they should provide clearer guidance. Well - as soon as they do, people find ways to skirt the letter of the rules to once again take advantage of the economics of the subscription model. So should they just scrap the entire plan? Ruin it for people who are using it as it was intended (coding agent, light experimentation/headless use outside of that)? That doesn't seem right either. | | |
| ▲ | tempaccount420 2 days ago | parent | next [-] | | I don't think anyone would want the type of user that OpenClaw users are as customers... There will be a time for OpenClaw, but in the current world with limited compute, that time is not now. | |
| ▲ | athrowaway3z a day ago | parent | prev [-] | | I think HN needs a regular reminder that most things sold are commodities, without limits or restrictions on re-use. Coal and wheat have no DRM. This kind of thing is the exception.
Subsidized subscriptions work to distort the power of the market. The more successful they are (in destroying competition), the worse they leave consumers. While I get the individual steps that lead them to this "difficult position", I think I'll just keep telling everybody to cancel their sub and make sure not to get locked in. | | |
| ▲ | extr a day ago | parent [-] | | > Most things are sold as commodities without limits or re-use. This is somehow doubly wrong. Not only are most economic goods NOT commodities, there are plenty of economic analogs to AI subscriptions (streaming, telecom, gyms, buffets) and none of them operate as "unlimited with no restrictions on re-use". Really just a terribly misinformed way of thinking here. |
|
|
|
|
| ▲ | joshstrange 2 days ago | parent | prev | next [-] |
| Well that's clear as mud. I've complained, extensively, about this before but Anthropic really needs to make it clear what is and is not supported with or without a subscription. Until then, it's hard to know where you stand with using their products. I say all of this as someone who doesn't use OpenClaw or any Claw-like product currently. I just want to know what I can and can't do and currently it's impossible to know. |
| |
| ▲ | GorbachevyChase 2 days ago | parent | next [-] | | Anthropic changing what you get for your subscription week to week is why I would never spend beyond a hobby-tier license. Great product. Probably. But maybe depending on what hours of the day you use it. If it suits them. I can’t tell you how relieved I am that there are many capable open weight models in the wild to keep a ceiling on bad behavior. | | | |
| ▲ | LatencyKills 2 days ago | parent | prev | next [-] | | The poor communication and flip-flopping are what concern me. How can I buy into an ecosystem that might disallow one of my main workflows? I currently use several hook scripts to route specific work to different models. Will they disallow that at some point? We don't know because they can't get their story straight. | | |
| ▲ | dnautics 2 days ago | parent | next [-] | | Keep in mind this is hearsay. Since we are reading something through a non-official channel, it's maybe not right to call it "flip-flopping"? | |
| ▲ | LatencyKills 2 days ago | parent [-] | | Given the lack of clear communication, and the fact that their primary competitor openly supports the use of bespoke harnesses, I highly doubt this is an incorrect announcement. Anthropic is destroying goodwill that is hard-won in this space. At the end of the day, people just need to do their work in a way that makes sense for them. In my case (someone who has been building ML/AI tools for 25 years @ MS & Apple), I have much better results using my bespoke harness. If I'm paying $200/month for compute, I should be able to use it in a way that works for me. Given the pushback, I'm not alone. | |
| ▲ | dnautics 2 days ago | parent [-] | | Nobody said anything about the correctness of the announcement. | | |
| ▲ | LatencyKills 2 days ago | parent [-] | | > Keep in mind this is hearsay How, exactly, is that not saying something about the announcement? | | |
| ▲ | dnautics 2 days ago | parent [-] | | It's saying something about the announcement; it's not saying something about the correctness of the announcement. I used the word hearsay to imply that flip-flopping should only be a judgement on the comms of the entity accused of flip-flopping, not on information living in some third-party source. | |
| ▲ | LatencyKills 2 days ago | parent [-] | | And I was referring to all of the historical flip-flopping. This latest flip just proves the point. Of course, you're simply being pedantic. Everyone knows why they are making this change (which is more important than your silly take on what constitutes flip-flopping). The point: Anthropic is losing subscribers because it has no idea what it actually wants to be. |
|
|
|
|
| |
| ▲ | 2 days ago | parent | prev | next [-] | | [deleted] | |
| ▲ | chrisjj a day ago | parent | prev [-] | | > The poor communication and flip-flopping are what concern me. There's no poor communication. The communication is excellently crafted to deceive users and facilitate flip-flopping. |
| |
| ▲ | giancarlostoro 2 days ago | parent | prev | next [-] | | They really need to figure out the rules. Look, I'd love to use a custom harness with Claude Code that I can extend, or build my own (which I'm doing) and use it with my Claude Code license; I don't want to overspend on tokens if I can help it. They really need to set the bar for their next model releases at using fewer tokens, or trim their own cost for how these models are run. I'd be okay with a slightly slower experience with Claude Code if it meant similar throughput at less cost, especially if I can build my own harness for it. | |
| ▲ | gck1 15 minutes ago | parent | next [-] | | You can't really 'figure out' a rule that imposes restrictions on a client-side interface to your models. As long as the CLI exists, third-party software will always be able to interact with it, whether Anthropic likes it or not. This is why it's so inconsistent and confusing: they simply can't come up with a rule that only affects OpenClaw/pi/etc. and not 'allowed' automations. You either permit automation, or you don't. Right now, they want to have it both ways. | |
| ▲ | mort96 2 days ago | parent | prev | next [-] | | They might've just hit a ceiling with the quality they can get per token? Maybe the only real way left to scale quality is to increase token usage? | |
| ▲ | ctxc 2 days ago | parent | prev | next [-] | | Out of curiosity, what kind of custom harness are you thinking? | | | |
| ▲ | chrisjj 2 days ago | parent | prev [-] | | > They really need to figure out the rules A company built groundup on rule-breaking? Ain't gonna happen. |
| |
| ▲ | thefounder 2 days ago | parent | prev | next [-] | | I don’t get why people are so surprised. Didn’t they learn anything from the Twitter APIs and the like? The APIs stay open as long as they solve the short-term problem; then Anthropic builds the features people actually use (more or less) and bans the use of the APIs by competing clients | |
| ▲ | dpoloncsak 2 days ago | parent [-] | | This is the exact opposite of what you are describing: Anthropic locked down the API until they released their competitor, then re-allowed the API | |
| ▲ | thefounder 17 hours ago | parent [-] | | Not really. They realised they acted too soon. Give them some time until the market “consolidates” and they will change their policy again. Why would they want someone else to develop competing clients? |
|
| |
| ▲ | biophysboy 2 days ago | parent | prev | next [-] | | I think a good corollary idea to "vibe coding" is the "vibe product". There is so much stuff popping in and out of existence and my excitement has declined. | |
| ▲ | nopointttt a day ago | parent | prev | next [-] | | There's a technical reason the stance is so vague. Claude CLI works if you reuse its session token, works behind ANTHROPIC_BASE_URL, works wrapped in a shell script. Anthropic sees the same telemetry either way. To draw a firm line they'd have to cap what the CLI does, or ship a policy file the tooling can actually check, and both are a real investment. I read the current fog as them being honest about that rather than being evasive. It's still annoying. | |
| ▲ | jstummbillig 2 days ago | parent | prev | next [-] | | I have no trouble believing that all labs are trying really hard to come up with an enticing bundle that works for a wide variety of users, but it's hard to anticipate the popularity of something like OpenClaw, which completely blows through all previous usage patterns at the population level. It seems like a tall order to set lasting rules in this space at this point, when nobody really understands what is going to happen in a few weeks. | |
| ▲ | cruffle_duffle 19 hours ago | parent [-] | | I feel like this is basically the answer. Things are constantly changing, and it’s hard to predict what has staying power and what is just a blip on the evolutionary railroad. All this fuzziness from Anthropic reads more like an incredibly fast-growing company working in a brand new space full of uncharted waters. In other words, they are making shit up not because they suck but because that is literally all one can do. |
| |
| ▲ | devanshranjan 2 days ago | parent | prev | next [-] | | Same with building on their API. You design around what you think is allowed, then a blog post shifts everything. A proper developer policy page would fix this. | |
| ▲ | chrisjj a day ago | parent | next [-] | | > A proper developer policy page would fix this. And that's why we don't get one. | |
| ▲ | TeMPOraL 2 days ago | parent | prev [-] | | Stealing OAuth keys from the first-party app to impersonate it, in order not to have to pay for usage with a properly generated API key, was never part of normal use anywhere. | |
| ▲ | bradynapier 2 days ago | parent | next [-] | | Yeah, the main point here is they had a CLI specifically that allowed you to call Claude, and that was being used. The CLI giving you access should kind of indicate that you should be able to use it as it is defined in the help. I do agree, though, that the parts of this that were actually using the Claude system to generate OAuth keys themselves are a little sus. That makes sense to say “must use Claude harness to login before calling Claude cli or using Claude code sdk” | |
| ▲ | dnautics 2 days ago | parent | prev [-] | | You're not stealing oauth keys, they are your keys?? |
|
| |
| ▲ | 2 days ago | parent | prev | next [-] | | [deleted] | |
| ▲ | chrisjj 2 days ago | parent | prev | next [-] | | > Until then, it's hard to know where you stand with using their products. Working As Designed, clearly. | |
| ▲ | tehjoker 2 days ago | parent | prev [-] | | They probably decreased the cost and limited these external calls |
|
|
| ▲ | Alifatisk 2 days ago | parent | prev | next [-] |
| > Anthropic staff told us OpenClaw-style Claude CLI usage is allowed again Anthropic staff have made contradictory statements on Twitter and have corrected each other. Their attempts at clarification led to confusion. > OpenClaw treats Claude CLI reuse and claude -p usage as sanctioned for this integration unless Anthropic publishes a new policy. Oh cool, so everything is back to business now, until they all of a sudden update their policy tomorrow and retract everything. Anthropic have proved themselves to be unreliable when it comes to CC. Switching to other providers is the best way to go, if you want to keep your insanity. |
| |
| ▲ | ffsm8 2 days ago | parent | next [-] | | > Switching to other providers is the best way to go, if you want to keep your insanity. Best and most applicable typo ever ʕ ´ • ᴥ •̥ ` ʔ | |
| ▲ | operatingthetan 2 days ago | parent | prev | next [-] | | This is such a strange way for this to be announced. Why is openclaw telling us this? I wouldn't even trust it until Anthropic says so themselves. | | |
| ▲ | bandrami 2 days ago | parent | next [-] | | It's the PayPal model of customer service: they'll ban you at any time for any reason or none at all, but if you're very nice they might be willing to have a human look at that decision at some point, but probably not. | | |
| ▲ | LtWorf 2 days ago | parent | next [-] | | Oh yeah, that happened to my PayPal account the one time I had a user donate to me! At least the only action I was still able to perform was to refund the user, or PayPal would have just kept the money. | |
| ▲ | SV_BubbleTime 2 days ago | parent | prev [-] | | Straight up, PayPal and Venmo can go to hell. They banned my account. I painted a gun for a guy with a Cerakote bake-on setup. He paid with PayPal and they banned me. His gun, just a legal service to paint it. I pulled our company portal away from PayPal when they refused to restore my account. Five years later, I tried to re-activate, and the human I finally got to effectively told me to fuck off; so I will spend the rest of my life bashing that trash company at every chance. |
| |
| ▲ | stingraycharles 2 days ago | parent | prev | next [-] | | They had this on here since day 1 of the block. This is just Openclaw saying "if you run Openclaw inside Claude Code, it's compliant with the Anthropic ToS", because, well, it's literally running inside Claude Code. What's not allowed is grabbing the oauth tokens and using these for your own custom agent, which is what was (and still is) banned. Nothing has changed, this appears to just be a giant misunderstanding (and probably a poor choice of words from Openclaw). | | |
| ▲ | buzer 2 days ago | parent | next [-] | | But there was a period of time when using Openclaw via Claude Code (via -p) was not allowed and it even gave an error message in that case. That's why people find the constantly changing messaging confusing. https://x.com/steipete/status/2040811558427648357 | |
| ▲ | ikidd 2 days ago | parent | prev [-] | | That's how I took that; if I installed CC and worked in CC to modify OC, all good. If I ran the openclaw gateway itself with a CC oauth key, my bad. |
| |
| ▲ | eloisant 2 days ago | parent | prev | next [-] | | That's the thing, it's not announced at all. The title is wrong. It's just OpenClaw people claiming "Anthropic told us it's fine". | | | |
| ▲ | deaux 2 days ago | parent | prev [-] | | Strategic ambiguity. |
| |
| ▲ | jeremyjh 2 days ago | parent | prev | next [-] | | The most recent Anthropic announcement was not that people would be banned for using subscriptions with OpenClaw, but that it would be charged as extra usage. I think the reason this was changed three days after that announcement is that being charged for extra usage meant people would not be banned for using their subscription OAuth tokens directly against the Anthropic API with a third party harness, as they had been before. But rather both that usage, and the more recent claude -p usage both be charged as extra usage. I don't see anything on this page that claims something different from that, or that addresses that claim at all. | |
| ▲ | azmz 3 hours ago | parent | prev | next [-] | | Been building atmita.com for this. Every automation has its own model dropdown, so if a provider changes terms you swap that one run to another without rewriting. Claude and GPT side by side right now, more coming. | |
| ▲ | giancarlostoro 2 days ago | parent | prev | next [-] | | They need to stop announcing this on freaking Twitter and make a formal announcement on their blog, this is ridiculous and unprofessional. Make a policy decision and stick to it, and dictate how alternative harnesses should work. | |
| ▲ | troupo 2 days ago | parent | prev | next [-] | | > until they all or sudden update their policy tomorrow that retracts everything. Oh no. They won't update the policy. Boris or Thariq will casually mention in a random off-hand comment on Twitter that this is banned now, and then will gaslight everyone that this has always been the case. | |
| ▲ | JumpCrisscross 2 days ago | parent | prev | next [-] | | > Switching to other providers is the best way to go, if you want to keep your insanity I remember when I’d periodically rage quit from Uber One to Lyft Pink and back again every time I had a terrible customer-service experience. In the end, I realized picking a demon and getting familiar with its quirks was the better way to go. I’m currently sticking with Claude, in part because I’m not exposed to this nonsense due to OpenClaw, in larger part because of the Hegseth-Altman DoD nonsense. More broadly, however, I’m not sure if any of Google, Anthropic or OpenAI are coming across as stars in AI communication and customer service. | | |
| ▲ | Aurornis 2 days ago | parent | next [-] | | With Uber and Lyft or even Anthropic versus OpenAI versus <insert flavor of the month here> I don’t even try to attach myself to any one brand. It’s so easy to switch between all of them. I can open the Uber and Lyft apps and compare in a minute. I can run Claude and ChatGPT in parallel and see which one gets a better handle on the question. I can switch LLM providers with a few minutes of signing up for one and cancelling the other. They all try to encourage brand lock in but it’s easy to pick up and move if you’re using them for their main service. | |
| ▲ | 306bobby 2 days ago | parent | prev [-] | | This isn't really a fair comparison, imo. There hasn't been anywhere near the confusion and drama surrounding things like codex and gemini-cli. I don't think they're all on the same pedestal right now |
| |
| ▲ | cyanydeez 2 days ago | parent | prev [-] | | you know how a bunch of IT people are trying to "escape the permanent underclass"? well, it seems like anyone building their tools on cloud providers is doing the opposite. they're willingly becoming the underclass in hopes it trickles down |
|
|
| ▲ | 1una 2 days ago | parent | prev | next [-] |
| Looks like this was restored 2 weeks ago[0], 3 days after Anthropic said OpenClaw requires extra usage[1]. At this point, it's hard to take this seriously. No official statement and not even a tweet? [0]: https://github.com/openclaw/openclaw/commit/d378a504ac17eab2... [1]: https://news.ycombinator.com/item?id=47633396 |
| |
| ▲ | stingraycharles 2 days ago | parent | next [-] | | No, it's just that it's confusing, because there are two ways of using Claude Code credentials: 1. Take the OAuth credentials and roll your own agent -- this is NOT allowed 2. Run your agentic application directly in Claude Code -- this IS allowed When OpenClaw says "OpenClaw-style CLI usage", it means literally running OpenClaw in an official Claude Code session. Anthropic has no problems with this; it is compliant with their ToS. When you use Claude Code's OAuth credentials outside of the Claude Code CLI, Anthropic will charge you extra usage (API pricing) within your existing subscription. | |
| ▲ | filleokus 2 days ago | parent | next [-] | | But... even when running it in mode 2 ("claude -p"), they at certain points tried to detect OpenClaw usage based on the prompts made, and blocked it [0]. Now OpenClaw says that Anthropic sanctions this as allowable again. I agree with GP that this is hard to take seriously. [0]: https://x.com/steipete/status/2040811558427648357 | |
| ▲ | Wowfunhappy 2 days ago | parent | next [-] | | > Even when running it in mode 2 ("claude -p") they at certain points tried to detect OpenClaw-usage based prompts made, and blocked them [0]. But then the Claude Code product manager said: > This is not intentional, likely an overactive abuse classifier. Looking, and working on clarifying the policy going forward. https://xcancel.com/bcherny/status/2041035127430754686#m | |
| ▲ | zaphar 2 days ago | parent | prev | next [-] | | I mean, if you are them and trying to detect when people are using your system incorrectly, the detection system is going to be a little bit flaky. How do they prove you aren't violating the ToS by using OAuth for a system they didn't approve that usage for? The fault here is not with Anthropic. It lies with cowboy coders creating a system that violates a provider's terms of service, creating an adversarial relationship. | |
| ▲ | stingraycharles 2 days ago | parent | prev [-] | | I have never heard of this; it cannot be reproduced, and it is not in accordance with Anthropic's ToS. There's a lot of FUD being spread around. They don't ban OpenClaw prompts; each custom LLM application provides a client application id (this is how e.g. OpenRouter can tell you how popular OpenClaw is, and which models are used the most). Anthropic just checks for that. | |
| ▲ | filleokus 2 days ago | parent | next [-] | | Either me or you are misunderstanding the situation. A comment from the GP link: https://news.ycombinator.com/item?id=47633867 > This is slightly different from what OpenCode was banned from doing; they were a separate harness grabbing a user’s Claude Code session and pretending to be Claude Code. > OpenClaw was still using Claude Code as the harness (via claude -p)[0]. I understand why Anthropic is doing this (and they’ve made it clear that building products around claude -p is disallowed) but I fear Conductor will be next. | | |
| ▲ | stingraycharles 2 days ago | parent [-] | | If OpenClaw was still using Claude Code as the harness, I don't know how to reconcile that with "OpenClaw is based on the pi framework", which is decidedly NOT Claude Code. From what I understand, they still had the Claude Code harness available, but were mostly fully integrated on the pi agent framework, using Claude Code's OAuth credentials directly. | |
| ▲ | piazz 2 days ago | parent [-] | | Openclaw allows you to effectively “shell out” to another harness for your model calls, while still using Pi as your main agentic harness. This is the claude -p workflow. Tools and skills are injected into Claude and they hack session persistence into it as well. They also absolutely blocked OpenClaw system prompts from this path in the prior weeks, based purely on keyword detection. Seems they’ve undone that now. |
|
| |
| ▲ | throwpoaster 2 days ago | parent | prev | next [-] | | No, if you ran Openclaw using Anthropic API as a provider, or had it use the ‘claude -p’ cli interface, you got an email from Anthropic threatening a ban unless you upgraded billing. This was widely reported, and happened to me. You probably can’t reproduce it or see it in docs because they seem to have changed the policy. | |
| ▲ | holoduke 2 days ago | parent | prev [-] | | `claude -p` is using the Claude CLI. How can you know it's my own claw? |
|
| |
| ▲ | ElFitz 2 days ago | parent | prev [-] | | And yet running the Claude Code cli with `-p` in ephemeral VMs gets me the "Third-party apps now draw from extra usage, not plan limits. We've added a credit to your organization to get you started. Ask your workspace admin to claim it and keep going." error. One day you're experimenting just fine. The next, everything breaks. And I'd gladly use their web containerized agents instead (it would pretty much be the same thing), but we happen to do Apple stuff. So unless we want to dive into relying on ever-changing unreliable toolchains that break every time Apple farts, we're stuck with macOS. |
| |
| ▲ | jeremyjh 2 days ago | parent | prev | next [-] | | I think this is consistent with the Anthropic announcement. I do not see anything on this page that says it will NOT be charged as extra usage. The most recent Anthropic announcement was not that people would be banned for using subscriptions with OpenClaw, but that it would be charged as extra usage. I think the reason this was changed three days after that announcement is that being charged for extra usage meant people would not be banned for using their subscription OAuth tokens directly against the Anthropic API with a third party harness, as they had been before. But rather both that usage, and the more recent claude -p usage both be charged as extra usage. | |
| ▲ | ethbr1 2 days ago | parent | prev | next [-] | | > No official statement and not even a tweet? Release notes and announcements are a well-known agentic anti-pattern. If you're doing them, you're doing agentic wrong. /s-ish-also-cry | |
| ▲ | WhereIsTheTruth 2 days ago | parent | prev [-] | | This is called FUD, amplify negativity, silence positivity | | |
| ▲ | flagos10 2 days ago | parent | next [-] | | It's also something super simple for Anthropic to clarify, if they want. | |
| ▲ | scottyah 2 days ago | parent [-] | | They have, many times. We're seeing a chain where people are pointing to OpenClaw's GitHub for information (a tool that was effectively acquired by their #1 competitor) and trying to make it sound so crazy. The actual flow was simple: they said "don't use your Claude Code membership on tools that burn lots of the subsidized tokens". Then a bunch of people raised a fit (because OpenClaw is almost useless without Claude models), so Anthropic basically said "that's what the API keys are for". Anthropic has the info on their website, emailed all users at each step, and I've seen it on X. I'm sure it's in other places as well. |
| |
| ▲ | arcanemachiner 2 days ago | parent | prev | next [-] | | Considering Anthropic is constantly doing the opposite, I would just call it "balance". | | |
| ▲ | embedding-shape 2 days ago | parent [-] | | Not that I'm some paragon of critical thinking exactly, but is there any sort of proof or evidence of Anthropic "silencing negativity"? It wouldn't surprise me, but I also haven't seen anything conclusive about it, so spreading that they are as fact is, ironically, FUD itself. | |
| ▲ | Forgeties79 2 days ago | parent | next [-] | | Name a startup that isn’t trying to downplay, scrub, or otherwise silence negative press. | | | |
| ▲ | root_axis 2 days ago | parent | prev [-] | | When they say "doing the opposite" they are referring to Anthropic's hyperbolic marketing strategy. Though, I don't think that justifies spreading FUD in the opposite direction. I also don't think the comment the GP was replying to contains FUD. |
|
| |
| ▲ | Forgeties79 2 days ago | parent | prev | next [-] | | ^every comment when someone says something remotely negative about LLMs and their less useful cousins, cryptocurrencies. It’s baffling how similar the language and attitude are sometimes. Anthropic was, even to me, “one of the better ones” until recently. They have made many questionable/poor decisions in the last 6-8 weeks, and people are right to call them out for it, especially when they want our money. | |
| ▲ | signatoremo 2 days ago | parent | next [-] | | There are bad products and ones that are never used, just to paraphrase. Every single decision of any business gets derided by some segments of its users. You are free to call out Anthropic for anything you are unhappy about, and you are free to switch vendor, but calling them “good” or “bad” just shows your emotional immaturity, or bias. | | |
| ▲ | Forgeties79 2 days ago | parent [-] | | Everyone is biased. You're biased. It doesn't invalidate your opinion, just like it doesn't invalidate mine. There is no "objective truth" about the moral value or utility of Anthropic. "You're biased" in these contexts is often just a weak argument bordering on a personal attack. You're attacking my credibility with no basis for it rather than arguing in favor of Anthropic. |
| |
| ▲ | ToucanLoucan 2 days ago | parent | prev [-] | | What's funny is I had personally settled on Anthropic as... the best of a bad situation, I guess? I found the tech useful even if I still deeply hate the industry and hype machine around it. Now though I can't get through a full discussion with Claude before the usage restrictions kick in, which has done a far better job of getting me to kick the habit than anything else. I still VERY occasionally use it (as I'm friggin able to anyway) but it's definitely nowhere near my usage previously. And I refuse to give them money, and besides which have no goddamn notion of whether it would even be worth it on the lowest paid tier. Ah well. The free ride was fun but I knew it had a shelf life. | |
| ▲ | gavinray 2 days ago | parent [-] | | I pay the $20 sub for all of the Frontier models and just hop between them as performance changes I will say that Codex high/x-high has consistently performed the best for me, but YMMV | | |
| ▲ | ToucanLoucan 2 days ago | parent [-] | | See the thing is their storefront is so fucking vague. Right now I hit usage limits after about 4-6 messages during the day, depending on length. They say the low tier is 5x usage, so does that mean I can send 20-30 messages? Because that's not remotely worth $20 a month to me. | | |
| ▲ | wafflemaker 2 days ago | parent [-] | | It used to be. But I consistently got to use more than that. Funny thing (or I just imagined that), when I used ChatGPT for studying, it was quite generous about over usage.
When I was just messing around, testing where the guardrails are or trying to get it to generate sexual prose about my siblings to send to them for laughs, the limits were enforced much more strictly. I remember when it went up from 25/3h to 50/3h. And I was like meh, because I'd already used it over that limit multiple times. |
|
|
|
| |
| ▲ | redsocksfan45 2 days ago | parent | prev [-] | | [dead] |
|
|
|
| ▲ | victorbjorklund 2 days ago | parent | prev | next [-] |
| Anthropic is really trying to burn all that goodwill they worked up by raising prices, reducing limits and making it impossible to know what the actual policies are. |
| |
| ▲ | notarobot123 2 days ago | parent | next [-] | | Boiling the frog is an art form. You've got to know when to turn up the heat and when to let it simmer. | | |
| ▲ | Gigachad 2 days ago | parent | next [-] | | Don't know, I feel like I've watched every tech company get through every controversy without consequence: Google when they merged YouTube and Google+, Reddit multiple times, Facebook after countless scandals, Microsoft destroying Windows and pushing ads. At the end of the day a solid product and company can withstand online controversy. | |
| ▲ | conductr 2 days ago | parent | next [-] | | > a solid product and company can withstand online controversy A product with a massive moat. Switching from Claude to another competitor is insanely easy and without much loss of quality. Until they’ve built their moat, burning goodwill is foolish. What’s different is it’s probably required due to the cash that’s being burnt to operate. They can’t afford to keep offering so much for so little revenue. | |
| ▲ | nativeit 2 days ago | parent | prev [-] | | At the end of the day, unenforced anti-competition regulations can bulldoze controversies. |
| |
| ▲ | sitkack 2 days ago | parent | prev [-] | | Hormussy started it. |
| |
| ▲ | bandrami 2 days ago | parent | prev | next [-] | | If you want LLMs to continue to be offered we have to get to a point where the providers are taking in more money than they are spending hosting them. And we still aren't there (or even close). | | |
| ▲ | hobom 2 days ago | parent | next [-] | | They are taking in more than they are spending hosting them. However, the cost for training the next generation of models is not covered. | | |
| ▲ | bandrami 2 days ago | parent [-] | | Nope. They're losing money on straight inference (you may be thinking of the interview where Dario described a hypothetical company that was positive margin). The only way they can make it look like they're making money on inference is by calling the ongoing reinforcement training of the currently-served model a capital rather than operational expense, which is both absurd and will absolutely not work for an IPO. | | |
| ▲ | wild_egg 2 days ago | parent | next [-] | | Inference, in and of itself, can't be completely unprofitable. Unless you're purely talking about Anthropic? But > If you want LLMs to continue to be offered we have to get to a point where the providers are taking in more money than they are spending hosting them Suggests you just mean in general, as a category, every provider is taking a loss. That seems implausible. Every provider on OpenRouter is giving away inference at a loss? For what purpose? | | |
| ▲ | bandrami a day ago | parent [-] | | For the same reason that Amazon operated at a loss for two decades and Uber operated at a loss for a decade and a half. The problem is the free money hose isn't running anymore. |
| |
| ▲ | victorbjorklund 12 hours ago | parent | prev | next [-] | | I really doubt that, since their prices are even higher than what no-name hosts on OpenRouter etc. charge. | |
| ▲ | dgellow 2 days ago | parent | prev [-] | | Do you have sources? I would be interested to read them | | |
| ▲ | bandrami 2 days ago | parent [-] | | Probably the best roundup is Ed Zitron at https://wheresyoured.at Half the articles are paywalled but the free ones outline the financial situation of the SOTA providers and he has receipts |
|
|
| |
| ▲ | quikoa 2 days ago | parent | prev | next [-] | | The open models may not be as great, but maybe they're good enough. AI users can switch when prices rise, before things become sustainable for (some of) the large LLM providers. | | |
| ▲ | Gigachad 2 days ago | parent [-] | | Currently it costs so much more to host an open model than it costs to subscribe to a much better hosted model. Which suggests it’s being massively subsidised still. | | |
| ▲ | finaard 2 days ago | parent | next [-] | | For a lot of tasks smaller models work fine, though. Nowadays the problem is less model quality/speed, but more that it's a bit annoying to mix it in one workflow, with easy switching. I'm currently making an effort to switch to local for stuff that can be local - initially stand alone tasks, longer term a nice harness for mixing. One example would be OCR/image description - I have hooks from dired to throw an image to local translategemma 27b which extracts the text, translates it to english, as necessary, adds a picture description, and - if it feels like - extra context. Works perfectly fine on my macbook. Another example would be generating documentation - local qwen3 coder with a 256k context window does a great job at going through a codebase to check what is and isn't documented, and prepare a draft. I still replace pretty much all of the text - but it's good at collecting the technical details. | | |
| ▲ | pbronez 2 days ago | parent [-] | | I haven’t tried it yet, but Rapid MLX has a neat feature for automatic model switching. It runs a local model using Apple’s MLX framework, then “falls forward” to the cloud dynamically based on usage patterns: > Smart Cloud Routing
>
> Large-context requests auto-route to a cloud LLM (GPT-5, Claude, etc.) when local prefill would be slow. Routing based on new tokens after cache hit. --cloud-model openai/gpt-5 --cloud-threshold 20000 https://github.com/raullenchai/Rapid-MLX |
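The quoted routing rule is simple enough to sketch. A hedged toy version (the function names and the ~4-chars-per-token heuristic are my assumptions, not Rapid MLX's actual API; only the 20,000-token threshold comes from the flags quoted above):

```python
# Sketch of "fall forward" routing: estimate how much fresh prefill a request
# needs; if it exceeds the threshold, send it to a cloud model instead of
# running slow local prefill. All names here are hypothetical.

CLOUD_THRESHOLD = 20_000  # mirrors --cloud-threshold 20000 in the quoted docs

def estimate_new_tokens(prompt: str, cached_prefix_len: int) -> int:
    """Rough token count for the part of the prompt NOT covered by the KV
    cache, using the common ~4 chars/token heuristic (an estimate only)."""
    uncached_chars = max(0, len(prompt) - cached_prefix_len)
    return uncached_chars // 4

def route(prompt: str, cached_prefix_len: int = 0) -> str:
    """Return which backend would handle this request."""
    if estimate_new_tokens(prompt, cached_prefix_len) > CLOUD_THRESHOLD:
        return "cloud"  # local prefill would be slow; forward to hosted model
    return "local"      # small enough to serve on-device

print(route("short prompt"))                            # local
print(route("x" * 200_000))                             # cloud
print(route("x" * 200_000, cached_prefix_len=150_000))  # local: cache hit
```

The key detail from the docs is that routing counts only *new* tokens after a cache hit, which is why the third call stays local even though the prompt itself is huge.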
| |
| ▲ | stingraycharles 2 days ago | parent | prev | next [-] | | You can use open models through OpenRouter, but if you want good open models they’re actually pretty expensive fairly quickly as well. | | |
| ▲ | layoric 2 days ago | parent [-] | | I've found MiniMax 2.7 pretty decent, even pay-as-you-go on OpenRouter: at $0.30/mt in and $1.20/mt out, you can get some pretty heavy usage for between $5-$10. Their token subscription is heavily subsidized, but even if it goes up or away, it's pretty decent. I'm pretty hopeful for these open-weight models to become affordable at good enough performance. | |
| ▲ | stingraycharles 2 days ago | parent [-] | | It’s okay, but if you compare it to eg Sonnet it’s just way too far off the mark all the time that I cannot use it. |
|
| |
| ▲ | ericd 2 days ago | parent | prev | next [-] | | Efficiency goes way up with concurrent requests, so not necessarily subsidy, could just be economy of scale. | |
| ▲ | JumpCrisscross 2 days ago | parent | prev [-] | | If I drop $10k on a souped-up Mac Studio, can that run a competent open-source model for OpenClaw? | | |
| ▲ | Atotalnoob 2 days ago | parent | next [-] | | Qwen is probably your best bet… Edit: I'd also consider waiting for WWDC, they are supposed to be launching the new Mac Studio, and even if you don't get it, you might be able to snag older models for cheaper | |
| ▲ | JumpCrisscross 2 days ago | parent | next [-] | | > consider waiting for WWDC 100% agree. I’m just looking forward to setting something up in my electronic closet that I can remote to instead of having everything tracked. | |
| ▲ | storus 2 days ago | parent | prev [-] | | Latest rumors are no Mac Studio until at least October. |
| |
| ▲ | pbronez 2 days ago | parent | prev [-] | | Rapid MLX team has done some interesting benchmarking that suggests Qwopus 27B is pretty solid. Their tool includes benchmarking features so you can evaluate your own setup. They have a metric called Model-Harness Index: MHI = 0.50 × ToolCalling + 0.30 × HumanEval + 0.20 × MMLU (scale 0-100) https://github.com/raullenchai/Rapid-MLX | | |
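The quoted metric is a plain weighted sum, so it's easy to compute yourself. A minimal helper, with the weights taken straight from the comment (the sample scores below are made up):

```python
# Model-Harness Index as quoted: MHI = 0.50*ToolCalling + 0.30*HumanEval
# + 0.20*MMLU, with every input on a 0-100 scale.

def mhi(tool_calling: float, human_eval: float, mmlu: float) -> float:
    """Weighted benchmark blend; raises if a score is off the 0-100 scale."""
    for score in (tool_calling, human_eval, mmlu):
        if not 0 <= score <= 100:
            raise ValueError("scores must be on a 0-100 scale")
    return 0.50 * tool_calling + 0.30 * human_eval + 0.20 * mmlu

print(mhi(80, 70, 60))  # 73.0
```

Tool calling dominating the weighting matches the stated goal: measuring how well a model drives a harness, not raw knowledge.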
| ▲ | JumpCrisscross 2 days ago | parent [-] | | Pardon the silly question, but why do I need this tool versus running the model directly (and SSH’ing in when I’m away from home)? |
|
|
|
| |
| ▲ | Larrikin 2 days ago | parent | prev | next [-] | | It is nobody's responsibility to ensure billion dollar companies are profitable. Use them until local models are good enough | |
| ▲ | lynx97 2 days ago | parent | prev | next [-] | | I see the current situation as a plus. I get SOTA models for dumping prices. And once the public providers go up with their pricing, I will be able to switch to local AI because open models have improved so much. | |
| ▲ | baruch 2 days ago | parent | prev | next [-] | | If they started doing caching properly and using proper sunrooms for that they'd have a better chance with that | | | |
| ▲ | carefree-bob 2 days ago | parent | prev | next [-] | | I think this has to be done with technological advances that make things cheaper, not charging more. I understand why they have to charge more, but not many are gonna be able to afford even $100 a month, and that doesn't seem to be sufficient. It has to come from some combination of better algorithms and better hardware. | |
| ▲ | bandrami 2 days ago | parent | next [-] | | Making it more affordable would be very bad news for Amazon, who are now counting on $100B in new spending from OpenAI over the next 10 years. | | |
| ▲ | throwthrowuknow 2 days ago | parent | next [-] | | Something's not adding up. Why is Amazon making financial plans for the next decade based on continued OpenAI spending, but you're saying AI providers like OpenAI and Anthropic aren't even close to being profitable, so how can they last a decade or more? Who's wrong? | |
| ▲ | bandrami 2 days ago | parent | next [-] | | I take it you don't remember 2008 | | |
| ▲ | arcanemachiner 2 days ago | parent [-] | | Are we before or after the part where they start throwing money out of helicopters? | | |
| ▲ | bandrami 2 days ago | parent [-] | | That's the interesting question, right? Because if this unwinds during a period of external inflation (say, because of a big war and energy shortage) then even the Bernanke would say helicopter money won't work |
|
| |
| ▲ | nimchimpsky 2 days ago | parent | prev [-] | | [dead] |
| |
| ▲ | philipwhiuk 2 days ago | parent | prev [-] | | Someone's going to get burned here that's for sure. This isn't going to end with every person on the planet paying $100 a month for an LLM. | | |
| ▲ | LtWorf 2 days ago | parent [-] | | A guy from Meta, interviewed by the BBC a few years ago, claimed that every school child in India was going to have metaverse VR or they'd be left behind in their education, so every family was certainly going to pony up the money. |
|
| |
| ▲ | Gigachad 2 days ago | parent | prev [-] | | They probably aren’t planning on making the money on consumer subscriptions. Any price is viable as long as the user can get more value out of it than they spend. | | |
| ▲ | bandrami 2 days ago | parent [-] | | "Sell this for less than it cost us" was a viable business plan during the ZIRP era but is not now |
|
| |
| ▲ | vegnus 2 days ago | parent | prev | next [-] | | I'll take local models over these corporate ones any day of the week. Hopefully it's only a matter of time | |
| ▲ | holoduke 2 days ago | parent | prev | next [-] | | Like with all new products, it takes time to let the market do its work. See it from the positive side: the demand for more and faster and bigger hardware is finally back after 15 years of dormancy. Finally we can see 128gb default memory or 64gb videocards in 2 years from now. | |
| ▲ | nimchimpsky 2 days ago | parent | prev [-] | | [dead] |
| |
| ▲ | baq 2 days ago | parent | prev | next [-] | | Would you please think of the shareholders | | |
| ▲ | sofixa 2 days ago | parent [-] | | What shareholders? Anthropic is a money-burning pit. Not to the same extent as OpenAI, but both will struggle hard to actually turn a profit some day, let alone make back the massive investments they've received. Not that they don't bring value; I'm just not convinced they'll be able to sell their products in a sticky enough way to sustain the prices they'll have to extract to make up for the absurd costs. | |
| ▲ | bruce511 2 days ago | parent | next [-] | | >> both will struggle hard to actually turn a profit some day, let alone make back the massive investments they've received. I'd agree with you, except I've heard this argument before. Amazon, Google, Facebook all burned lots of cash, and folks were convinced they would fail. On the other hand plenty burned cash and did fail. So could go either way. I expect, once the market consolidates to 2 big engines, they'll make bonkers money. There will be winners and losers. But I can't tell you which is which yet. | | |
| ▲ | throwthrowuknow 2 days ago | parent [-] | | I’m not sure there will be consolidation. There’s too much room for specialization and even when the models are trained to do the same task they have very different qualities and their own strengths and weaknesses. You can’t just swap one for the other. If anything, as hardware improves I’d expect even more models and providers to become available. There’s already an ocean of fine tuned and merged models. |
| |
| ▲ | baq 2 days ago | parent | prev [-] | | $20B ARR or so reported added in Q1 doesn’t sound particularly bad, they’ll raise effective prices some more while Claude diffuses into the economy, sounds like a money printer. The issue is they’re compute constrained on the supply side to grow faster… | | |
| ▲ | sofixa 2 days ago | parent [-] | | > $20B ARR or so reported added in Q1 doesn't sound particularly bad Unless you compare it with the reported cash burn or projected losses. > they'll raise effective prices some more while Claude diffuses into the economy, sounds like a money printer But the problem is, they have no moat. Even if Claude diffuses into the economy (still to be seen how much it can effectively penetrate sectors other than engineering, spam, marketing/communications), there is no moat; all providers are interchangeable. If Anthropic raises the prices too much, switch out to the OpenAI equivalent products. | |
| ▲ | baq 2 days ago | parent [-] | | > But the problem is, they have no moat I disagree very strongly with this, both anecdotally and in the data - subscriptions are growing in all frontier providers; anecdata is right here in HN when you look around almost everyone is talking about CC, codex is a distant second, and completely anecdotally I personally strictly prefer GPT 5.3+ models for backend work and Opus for frontend; Gemini reviews everything that touches concurrency or SQL and finds issues the other models miss. My general opinion is that models cannot be replaceable, because a model which can replace every other provider must excel at everything all specialist models excel at and that is impossible to serve at scale economically. IOW everyone will have at least two subscriptions to different frontier labs and more likely three. | | |
| ▲ | sofixa 2 days ago | parent [-] | | You're actually reinforcing my point. Models are interchangable and easy to switch between to adjust based on needs and costs. That means that no individual model / model provider has any sort of serious moat. If tomorrow Kimi release a model better at something, you'd switch to it. | | |
| ▲ | WarmWash 2 days ago | parent | next [-] | | It's likely that Chinese models will get regulatory knee-capped at some point, and the domestic labs all have pretty common costs they need to make up. This creates an environment where they match each other as prices climb. Unless Google/Meta suffocates the startups since they have actual cash flow that is non-AI. Sure you can go local, but lets be real, that would be <1% of users. | |
| ▲ | baq 2 days ago | parent | prev [-] | | Yes, in that sense, technically correct. I postulate in practice this won't matter since the space of use cases is so large if Kimi released the absolutely best model at everything they wouldn't be able to serve it (c.f. Mythos). |
|
|
|
|
|
| |
| ▲ | waysa 2 days ago | parent | prev | next [-] | | It's almost like they want me to switch to the Chinese clones - which they consider malicious actors. | |
| ▲ | throwaway613746 2 days ago | parent | prev | next [-] | | [dead] | |
| ▲ | aurareturn 2 days ago | parent | prev [-] | | Aren't they just doing what Hacker News was trying to tell them to do? That AI is useful but not sure if sustainable. Now they're increasing prices and decreasing tokens and you guys are pissed off. | | |
| ▲ | freedomben 2 days ago | parent [-] | | I feel this has to be said constantly, though I hate doing it. hn is not a monolith. People here routinely disagree with each other, and that's what makes it great | | |
| ▲ | aurareturn 2 days ago | parent [-] | | I'm aware. When I say "Hacker News", I mean a very sizable portion of users who keep repeating the OpenAI collapse imminent opinion. | | |
|
|
|
|
| ▲ | arjie 2 days ago | parent | prev | next [-] |
Oh, that's interesting. Right after they signed the deal with Amazon, so maybe it was all compute constrained. In any case, I tried the Codex $20/mo plan and the limits are so low I can hardly get anywhere before my agent swaps to a different agent. Somewhat suspicious that if I do this without an official Anthropic notice I'll lose my precious Max $200/mo account, so I'll sit tight, perhaps for a while. |
| |
| ▲ | theshrike79 2 days ago | parent | next [-] | | Wait, how? I had an idea on a whim to vibe-engineer an irccloud replacement for myself. Started with claude web + Opus 4.7 and continued with Claude Code. Ate up two full cycles of my quota in maybe 6-10 prompts. Then I iterated on that with pi.dev+codex for HOURS, managed to use 50% of my Codex Pro subscription. | | |
| ▲ | layoric 2 days ago | parent [-] | | Yeah, I tried Codex pro today and the $20 plan is way more generous than Claude's, especially lately. | | |
| ▲ | theshrike79 2 days ago | parent [-] | | I've had the cheapest personal tier for both since forever and I think I've run out of Codex quota _once_. With Claude it's a constant battle of typing /usage after every iteration and trying to guess if it's enough for the next task or not =) |
|
| |
| ▲ | jauntywundrkind 2 days ago | parent | prev | next [-] | | GPT-5.4 is brutally consumptive, for sure. It's not very verbal, but gpt-5.3 codex is wildly smart about coding & planning, and way way less token hungry. | |
| ▲ | rustyhancock 2 days ago | parent | prev | next [-] | | Consider Z.ai if you need "bulk" usage; GLM is now very good. They still have the occasional API brownout, however. I used to use GLM mostly and had a Claude Pro subscription for occasional review and clean-up. Now I just use GLM. I do think Claude Max is value for money, but it's more value than I personally need, and I like Anthropic less and less. | |
| ▲ | zurfer 2 days ago | parent [-] | | Naive question but are you not afraid z.ai will train on your personal data? | | |
| ▲ | azuanrb 2 days ago | parent | next [-] | | FAANG already does this all the time, doesn't it? Regardless of their policy, the US is no better than China from my point of view. In this case, I see no difference between sending my prompts to US or China companies. At least the China models are open source. | |
| ▲ | nullbyte 2 days ago | parent [-] | | I guess it depends if you are working on something important to national security. Especially corporate codebases, etc. |
| |
| ▲ | rustyhancock 2 days ago | parent | prev [-] | | I accept that all the providers will do what I would consider unethical with my data, and I simply don't expose anything I'm not willing to treat as the price of doing the business I want. The other criticism I see is "ask it what happened in 1989", but as my use case isn't writing a high school history essay I simply don't care. Nor do I believe one should seek those kinds of answers from any AI. (If you're curious, it simply cuts off the reply.) I fully appreciate that YMMV and what sits right for others will not align with what's acceptable to me. Anthropic and OpenAI are both in my bad books as much as Z.ai. Pick your poison, as they say. |
|
| |
| ▲ | dbbk 2 days ago | parent | prev [-] | | They said from the beginning it was a compute constraint and that OpenClaw was causing way more usage than they could handle |
|
|
| ▲ | walthamstow 2 days ago | parent | prev | next [-] |
| OpenClaw says Anthropic says it's OK. Well, that's crystal clear then. |
|
| ▲ | languagehacker 2 days ago | parent | prev | next [-] |
My OpenClaw assistant (who's been using Claude) lost all his personality over the last week, and couldn't figure out how to do things he'd never had any issues doing. I racked up about $28 worth of usage and then it just stopped consuming, so I don't know if there was some other issue, but it was persistent. I got sick of it and used a migration script to move my assistant's history and personality to a Claude Code config. With the new remote exec stuff, I've got the old functionality back without needing to worry about how bleeding-edge and prone to failure OpenClaw is. I feel like this is what their plan was all along -- put enough strain and friction on the hobbyist space that people are incentivized to move over to their proprietary solution. It's probably a safer choice anyway -- though I'm sure both are equally vibe-coded. |
| |
| ▲ | andai 2 days ago | parent | next [-] | | I thought the reason OpenClaw was banned was the strain it puts on the systems. (Well, 3rd party stuff was already illegal, and I believe remains so (sorta-kinda tolerated now? with the extra usage[0]), but enforcement seemed to be based on excessive usage of subs.) Doing the same thing but with 50K of irrelevant, proprietary system prompt doesn't seem to improve the situation! i.e. my question here is: if you replicate OpenClaw with `claude -p prooompt` and cron, is Anthropic happy? (Or perhaps their hope is that the people able and willing to do that represent a rounding error, which is probably true.) [0] https://news.ycombinator.com/item?id=47633568 | |
| ▲ | scottyah 2 days ago | parent | prev | next [-] | | Well when the middleman between you and your users is bought out by the competitor, it makes sense to move away from it. It's a bit like Apple selling iPhones in a Microsoft store. | |
| ▲ | Skidaddle 2 days ago | parent | prev [-] | | What does your Claude Code implementation of OpenClaw look like? | | |
| ▲ | languagehacker 18 hours ago | parent | next [-] | | I used https://github.com/Kevjade/migrate-openclaw, and then started running Claude Code with remote exec against an empty folder that I've advised it to start adding new memories into. So far, my bot's personality is back, and it can utilize the same skills as before, which it was failing on last week. I don't have an especially heavyweight implementation, because I only use mine to review things I've written in my Apple Notes (journaling of various kinds, mostly) and give insights. | |
| ▲ | andai 2 days ago | parent | prev [-] | | Not who you asked, but I slapped this together in 100 lines of code and you may find it useful. It's just `claude -p proompt` (or indeed, `codex exec prooompt`) inside a Telegram bot. (Was annoyed by NanoClaw's claim that it was 500 lines, so tried my own hand at it ;) No memory, no cron/heartbeat, context mgmt is just "new chat", but enough to get you started. Note: no sandboxing etc, I run this as an unprivileged linux user. So it can blow up its homedir, but not mine. Ideally, I'd run it on a separate machine. (My hottest take here is "give it root on a $3 VPS, reset if it blows up" ;) https://github.com/a-n-d-a-i/ULTRON You may also enjoy CLIProxyAPI, which does the same thing (claude -p / codex exec) but shoves an OpenAI-compatible API around it. Note: this probably violates every AI company's ToS (since it turns the precious subsidized subscription tokens into a generic API). OpenAI seems to tolerate such violations, for now, because they care about goodwill. Anthropic and Google do not. (Though Anthropic may auto-detect and bill it as extra usage; see elsewhere in this thread. Situation is very confusing right now.) https://github.com/router-for-me/CLIProxyAPI |
|
|
|
| ▲ | dmazin 2 days ago | parent | prev | next [-] |
| I got sick of the inconsistency caused by Anthropic tinkering with Claude Code and had canceled my 20x. My plan was to switch to Codex so I could use it in Pi. I am specifically talking about switching because of the harness, not model quality. Anyone else match my experience? I wonder how many other people recently did the same. It would be prudent of Anthropic to let people use Pro/Max OAuth tokens with other harnesses I think. Even though I get why they want to own the eyeballs. |
| |
| ▲ | redrove 2 days ago | parent | next [-] | | I've been using Codex Pro since they lobotomized Opus 4.6. Codex is so much better; GPT 5.4 xhigh fast is definitely the smartest and fastest model available. For a while there I had both Opus 4.6 and Codex access and I frequently pitted them against each other; I never once saw Opus come out ahead. Opus was good as a reviewer, though, but as an implementer it just felt lazy compared to 5.4 xhigh. One feature that I haven't seen discussed that much is how codex has auto-review on tool runs. No longer are you a slave to all-or-nothing confirmations or endless bugging; it's such a bad pattern. Even in a week of heavy duty work and personal use I still haven't been able to exhaust the usage on the $200 plan. I'll probably change my mind when (not IF) OpenAI rug pulls, but for spring '26, codex is definitely the better deal. | |
| ▲ | walthamstow 2 days ago | parent | next [-] | | I also made the switch to OpenAI, the $20 plan, I dunno about "so much better" but it's more or less the same, which is great! The models and tools levelling out is great for users because the cost of switching is basically nil. I'm reading people ITT saying they signed up for a year - big mistake. A year is a decade right now. | | |
| ▲ | redrove 2 days ago | parent | next [-] | | I underscored using xhigh + fast mode when saying it’s so much better. Now with Opus 4.7 of course the “burden” of adjusting reasoning effort has been taken away from you even at the API level. In my experience people don’t change the thinking level at all. | |
| ▲ | sitkack 2 days ago | parent | prev [-] | | What issues did you consider about sending your code base to OpenAI? | | |
| ▲ | walthamstow 2 days ago | parent [-] | | None mate. Code is cheap, it's not worth anything any more, especially not my little personal projects |
|
| |
| ▲ | Scotchy 2 days ago | parent | prev [-] | | Any alternative to Claude Design? Tried Figma with Opus 4.6 but it doesn't come close in my experience. Codex is abysmal for UI design imo. | |
| ▲ | dgb23 2 days ago | parent | next [-] | | It really depends on what you're trying to do and what your skillset is. But if you go information architecture first and have that codified in some way (especially if you already have the templates), then you can nudge any agent to go straight into CSS and it will produce something reasonable. | |
| ▲ | joelmanner 2 days ago | parent | prev | next [-] | | I've been using paper.design and it's been working well for me via mcp on claude code | |
| ▲ | makingstuffs 2 days ago | parent | prev | next [-] | | Have you tried stitch.withgoogle.com? | | | |
| ▲ | gbalduzzi 2 days ago | parent | prev | next [-] | | I created some decent prototypes with stitch but I don't know how it compares to claude design | | | |
| ▲ | StrangeSound 2 days ago | parent | prev [-] | | Google Stitch |
|
| |
| ▲ | tommica 2 days ago | parent | prev | next [-] | | I left Anthropic a while ago because of the similar shenanigans they had earlier. I went with opencode & zen. I still have their subscription, but am using pi now, mainly because something happened that made my opencode sessions unusable (cannot continue them, just blanks out, I assume something in the sqlite is fucked), and I cannot be bothered to debug it. For what I use the agents for, the Chinese models are enough | |
| ▲ | hboon 2 days ago | parent [-] | | Isn't using pi against their terms of use, which require going through the Claude Code CLI for all Max plan usage? (I had used Droid with Max previously; it was a great combo.) | |
| ▲ | the_mitsuhiko 2 days ago | parent | next [-] | | It's unclear right now. The current stance is that using pi or other coding harnesses eats into extra usage and that is the behavior one sees today. We have added a hint to pi now that warns you when you use an anthropic sub. | | | |
| ▲ | tommica 2 days ago | parent | prev [-] | | Probably - it was that kind of confusion that resulted in me switching providers. Plus I like being able to switch a model. |
|
| |
| ▲ | resonious 2 days ago | parent | prev | next [-] | | I also cancelled my 20x and switched to Codex. At this point even the Codex CLI seems to perform better than Claude Code... And so far I'm on the OpenAI Pro plan and haven't even needed to upgrade to their $100/mo plan. I'm getting more value for almost 10x cheaper. | |
| ▲ | hboon 2 days ago | parent | prev | next [-] | | I switched to Droid+Opus (with Claude Max) many months ago and it was my favorite combo. Had to stop because they don't like us proxying requests anymore. | |
| ▲ | athrowaway3z 2 days ago | parent | prev | next [-] | | I've been on pi for a few months now, built a custom tmux plugin so I can use nested pi and mix and match codex/claude instances. pi has been the best harness out of all the ones I tried, first- and third-party. Ever since the Anthropic block I've just canceled all my claude subs. Used to be codex was a bit worse; now they're practically equal. Claude is slightly better at directing other agents, but the difference is too minor and not worth the money. Claude usage limits / costs are absurd. Any 'principles' people praise Anthropic for aren't that relevant to me anyway because I'm not a US citizen. | |
| ▲ | serial_dev 2 days ago | parent | prev | next [-] | | My experience is the opposite of this thread's consensus. Context: full-time SWE, working on a large and messy codebase. Not working on crazy automations; working on fixing bugs, troubleshooting crashes, implementing features. Anthropic models write much better code: they are easy to follow, reasonable, and very close to what I would have done if I had the time... OpenAI's, on the other hand, generate extremely complex solutions to the simplest problems. I was so disappointed by non-Anthropic models that for a couple of weeks I only used Anthropic models, but based on this thread, I'll go back and give the others another try. It's good to go back and try things again every couple of weeks. Of course, I was annoyed that they lobotomized 4.6; the difference was night and day, and Anthropic is certainly not a company I trust. In my opinion, it shows their willingness to rug-pull, so I'm looking at other approaches. Since 4.7, things went back to normal: things you'd expect to work just work. | | |
| ▲ | yokoprime 2 days ago | parent [-] | | I feel like Opus 4.7 vs GPT 5.4 is pretty much just flavor variants, the big difference is in the harness. I like the Claude Code CLI better than the Codex CLI, it just clicks with how I like to interact with agents. The codex app on the other hand is better than the Claude app in code view, so if I had to stick to an app it would be codex all the way. |
| |
| ▲ | ai-tamer 2 days ago | parent | prev | next [-] | | (Disclosure: I work on tamer, an OSS supervisor for coding agents — biased.)
Add one more to the count. The OAuth-across-harnesses idea would help, but it doesn't fix the shape of the problem.
"Harness" has always felt off to me. Exoskeleton is closer — Claude Code, Codex, opencode wrap the model and augment it from the inside.
What's missing is a layer above that's explicitly not an exoskeleton: a thin supervisor. A master that watches and guides, nothing more. It just relays I/O and hands approval back to the human. | |
| ▲ | uvu 2 days ago | parent | prev | next [-] | | Same; I was on the 5x plan and canceled and switched to Codex, as I want to use Pi. | |
| ▲ | KronisLV 2 days ago | parent | prev | next [-] | | > I wonder how many other people recently did the same. Some negative signal for a better overall view on things: I'm still with Anthropic and will probably stay with them for the foreseeable future. I think after the DoD/DoW shenanigans (which in and of itself felt like a reasonable take on the part of Anthropic) they got a bunch of visibility and new users, so them hitting some scaling limits, and with it some service disruption, is pretty much inevitable. Couple this with the tokenizer changes and seeming decrease in model performance (adaptive thinking etc.), and lots of people will be rightfully pissed off, alongside increased downtime (doesn't matter that much for me, but definitely does matter for anything time-sensitive). At the same time, in practice I've only seen it do stupid things across 8 million tokens about 5 times (confusing user/assistant roles, not reading files that should be obvious for a given use case, and picking trivially wrong/stupid solutions when planning things), alongside another 4 times where tests/my ProjectLint tool caught things I would have missed. The error rate is still arguably lower than mine, though I work in a very well-known and well-represented domain (webdev with a bunch of DevOps and also some ML stuff, and integration with various APIs etc.). At the same time, the 85 EUR they gave to me for free has been enough to weather the instability around the pricing changes and peak usage. They've fixed most of the issues I had with Claude Code (notably performance), the sub-agent support is great, and it's way better than OpenCode in my experience. They also keep shipping new features like Dispatch and Routines and Design; those seem genuinely useful rather than misdirected, so that's nice.
The Opus 4.7 model quality with high reasoning is actually pretty nice as well and works better than most of the other models I've tried (OpenAI ones are good, I just prefer Claude phrasing/language/approaches/the overall vibe, not even sure what I'd call it exactly, all the stuff in addition to the technical capabilities). At the same time, if they mess too much with the 100 USD tier, I bet I could go to OpenAI or try out the GLM 5.1 subscription without too many issues. For now they're replacing all the other providers for me. Oh also I find the subscription vs API token-based payment approach annoying, but I guess that's how they make their money. | |
| ▲ | benjx88 2 days ago | parent | prev [-] | | Because the harness is the moat and the key IP, not the models themselves: that is the why! For both OpenAI and Anthropic, with all the money they've raised and the compute they've acquired and have on the books, of course no one can easily replicate them; who can afford all those interconnected datacenters and Nvidia GPUs? That's why OpenAI throws you a bone and gives you an open-source SDK harness, but not the one they actually use for ChatGPT. But now both of them have to deliver and do all the bullshit they said these models can do... truth is, they cannot. So now the bubble bursts and we will see what happens. We all have to buy iPhones or MacBooks, so that makes sense; we all use Chrome or Google Search, Instagram, TikTok. All these models and agents are shortcuts for all of us to be lazy and play games and watch YouTube or Netflix, because we use them to work less. Well, the party will be over soon. |
|
|
| ▲ | rcarmo 2 days ago | parent | prev | next [-] |
| PSA: Since you are still required to use Claude Code and I have had a bunch of non-technical people asking me to make https://github.com/rcarmo/piclaw based on Claude rather than pi (which is never gonna happen), I have started pivoting its Python grand-daddy into a Go-based web front-end that runs Claude as an ACP agent. Still early days, but code is available, sort of works if you squint, and welcomes PRs: https://github.com/rcarmo/vibes/tree/go |
|
| ▲ | eknkc 2 days ago | parent | prev | next [-] |
| I’ve been using codex cli and GPT 5.4. It is better at coding than Opus anyway. I did not really test Opus 4.7, but older versions generated worse results compared to GPT. I would not even have bothered trying it, though, if Anthropic had not banned my account. The shadiest thing I did was use it with opencode for a while, I think. Never installed claw or used CC tokens somewhere else. This is a weird company doing weird shit. |
|
| ▲ | throwup238 2 days ago | parent | prev | next [-] |
| I don’t think I’ve seen a more confused and shambolic product strategy since Google’s absurd line of GChat rebrandings. Last year I was excited about the constant forward progress on models, but since February or so it's just been a mess and I want off this ride. Either way I’m going to wait for “official” word from Anthropic, which I guess at this point will probably be a “Tell HN” or Reddit text post or a Xitter from some random employee’s personal account, because apparently that’s the state of corporate communication now. |
| |
|
| ▲ | dhoe 2 days ago | parent | prev | next [-] |
| I didn't even use openclaw and Anthropic disabled my account without explanation beyond "suspicious signals". If anyone found a way to get out of that, I'd be curious to hear it - genuinely no idea what I did wrong, and the Google docs form I filled out to appeal never got me any reply. |
| |
| ▲ | mondojesus 2 days ago | parent [-] | | Same thing happened to me in January. Never heard back from them after submitting the google form. A few weeks ago I went through the subscription flow again and the 'account disabled' message was no longer there. Didn't go through with the payment so it's possible I would have been blocked at that point but it looked like my account had been re-enabled. I think you just have to play the waiting game unfortunately. | | |
| ▲ | nitroedge a day ago | parent [-] | | Why not re-apply with a new email account instead of using the old email they banned? |
|
|
|
| ▲ | skapadia 2 days ago | parent | prev | next [-] |
| Basically, as long as you are using an Anthropic library or tool, you can use your OAuth credentials. For example, you can use the Claude Agent SDK with your OAuth credentials. This is sweet because I can prototype all sorts of agents with Claude Code embedded inside, at a predictable monthly cost. One nice use case is turning skills into standalone tools or apps. You can also do convoluted things like run Claude Code within tmux and send input to it and read the output. MCP Channels are interesting too for bidirectional communication between your app and a running Claude Code instance, with an MCP server sitting in between. It's slow, but allows for some interesting use cases when you want to step out of an existing CLI session to do work that is easier in a graphical interface, have Claude Code respond and do work, then when you're done, go back to the CLI session and continue, never losing context. |
| |
| ▲ | camkego a day ago | parent | next [-] | | The way I read the Anthropic docs, it seems the long-term plan is to block the usage of OAuth credentials with the "Claude Agent SDK". This URL: https://code.claude.com/docs/en/agent-sdk/overview says this:
"Unless previously approved, Anthropic does not allow third party developers to offer claude.ai login or rate limits for their products, including agents built on the Claude Agent SDK. Please use the API key authentication methods described in this document instead." Again, it seems Anthropic prefers to bill API token rates (long run), not subscriber effective token rates. | |
| ▲ | Cidan 2 days ago | parent | prev | next [-] | | You don't really need tmux at all for Claude Code CLI. Claude Code CLI supports streaming JSON input and streaming JSON output; you can use stdin/out as a pipe to control it. I'm doing this today in https://github.com/Cidan/ask -- works great. | | |
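If you go the stdin/stdout route, the consuming side reduces to folding a stream of events. A minimal Python sketch, assuming (this is an assumption about the shape, not a quote from the docs) that each output line is a JSON object with a `type` field and that a `result` event carries the final text:

```python
import json

def collect_result(stream_lines):
    """Fold NDJSON event lines into the final result text (None if absent)."""
    result = None
    for line in stream_lines:
        line = line.strip()
        if not line:
            continue
        event = json.loads(line)
        # Assumed event shape: {"type": ..., ...}; "result" carries the answer.
        if event.get("type") == "result":
            result = event.get("result")
    return result

# Simulated stream, standing in for the CLI's stdout:
sample = [
    '{"type": "system", "subtype": "init"}',
    '{"type": "assistant", "message": {"content": "working..."}}',
    '{"type": "result", "result": "42 files reviewed"}',
]
print(collect_result(sample))  # -> 42 files reviewed
```

In practice you'd wire `stream_lines` to the subprocess's stdout instead of a list; the fold itself doesn't change.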
| ▲ | skapadia 20 hours ago | parent | next [-] | | This is awesome. Thank you for pointing out streaming JSON input and output, I'll definitely be taking a look at that. | |
| ▲ | tber123 2 days ago | parent | prev [-] | | [dead] |
| |
| ▲ | tber123 2 days ago | parent | prev [-] | | [dead] |
|
|
| ▲ | doginasuit 2 days ago | parent | prev | next [-] |
| I'm out of the loop on Claude, hasn't it always been possible to use the Anthropic API with a tool like OpenClaw, paying per request? Is this limitation just for using your monthly subscription account? |
| |
| ▲ | LatencyKills 2 days ago | parent | next [-] | | Many people likely objected to the original restriction because it seemed as though Anthropic was trying to impede the development of competing tools. If I'm paying for compute, why should it matter whether I use Anthropic's harness (e.g., Claude Code) or a 3rd-party harness? | | |
| ▲ | doginasuit 2 days ago | parent | next [-] | | I find it a little bizarre that people have this expectation. You can still pay for compute and use it the way you want by paying for the product you actually want to use. Subscription products like this are not marketed or intended to be used as access to the API, but they also offer access to the API if that's what you want. I'm still not entirely clear why people insist on using their subscription like this, so let me know if I'm missing something. | | |
| ▲ | LatencyKills 2 days ago | parent [-] | | > I find it a little bizarre that people have this expectation. Well, enough people complained that Anthropic reversed their stance. Additionally, their primary competitor doesn't have any compute restrictions, which should help clarify why this decision was made. As someone who has been building ML/AI tools (@ MS & Apple) for almost 25 years, I can say that much of the value of the underlying model comes from the harness. Why shouldn't I be able to use the exact same compute with my own bespoke harness when the compute cost is the same? The Claude Code team continues to push out half-baked features that literally hamper my ability to use their tools. If I'm paying $200/month for compute, I should be able to use it however I like. | | |
| ▲ | doginasuit a day ago | parent [-] | | I'm inclined to agree for that price. Is there something the 200/month subscription gets you that the API doesn't? I still don't know why creating an API key and loading it up with $200 every month is an unfavorable option. Do you expect it would cost more? You might even end up paying less, especially if you can find ways to make it more efficient given that you are using a bespoke harness. I still feel like I'm missing something. If the API costs a lot more for the same amount of usage, that would make sense to me, but that has never been my experience. But I don't have experience with the Anthropic API. | | |
| ▲ | LatencyKills a day ago | parent [-] | | > I still don't know why creating an API key and loading it up with $200 every month is an unfavorable option. If I pay $200/month for a Max subscription, I can access ALL of Anthropic's tools (CC, Design, Cowork, etc.). If I pay $200/month for API access, then the only thing I can use is the API. You don't see how ridiculous that is? No other SOTA model company has these restrictions, which is why Anthropic keeps losing subscribers. | | |
| ▲ | doginasuit 20 hours ago | parent [-] | | > You don't see how ridiculous that is? No other SOTA model company has these restrictions You cannot use a ChatGPT subscription with a CLI tool, if you want to build your own harness you have to go through the API. I'm unsure about Gemini. Claude Code seems to be a special case because it is itself a CLI tool and so it becomes much easier to build a custom harness around, but its not surprising or unusual that it would have restrictions. Subscription products normally have terms of use that limit how you use it that are shaped by the infrastructure they rely on. A harness is often tuned to usage that fits with the constraints of the service, the backend that supports the tool is engineered for that usage. A custom harness could easily bypass that tuning and become unsustainable. On top of that, the API tends to be a much more flexible product to use directly. I can understand why you'd have more expectations paying for the max product, but this doesn't sound unusual or unreasonable to me. | | |
| ▲ | LatencyKills 20 hours ago | parent [-] | | > You cannot use a ChatGPT subscription with a CLI tool Okay, I'm done here. You obviously have no idea how this works (I have a ChatGPT subscription that I use with Codex). I know you're new to HN, but when someone says: I've literally built subscription tools at both Microsoft and Apple for over 25 years, you might want to stop and reconsider if you might be missing something. You are. /blocked |
|
|
|
|
| |
| ▲ | sumedh 2 days ago | parent | prev [-] | | Isn't their argument that third-party harnesses don't play nice with their GPUs? Which is a fair argument. With Claude Code they can predict what the traffic will look like; with third-party harnesses they cannot. | | |
| ▲ | LatencyKills 2 days ago | parent [-] | | If that was the argument, why did they reverse it? Anthropic is constantly destroying goodwill and now seems to be in panic mode. |
|
| |
| ▲ | handfuloflight 2 days ago | parent | prev [-] | | Yes, exactly. |
|
|
| ▲ | jollymonATX 2 days ago | parent | prev | next [-] |
| How can they be this bad at this? What was all that about then? |
|
| ▲ | tristanb 2 days ago | parent | prev | next [-] |
| Maybe it’s allowed because they built the ability to direct the costs to your extra usage budget, not your monthly subscription? |
|
| ▲ | Frannky a day ago | parent | prev | next [-] |
| Claude Code with Opus and the Max plan is fine for me, even though I'm not super happy about moments when it's not available, the costs, account banning, etc. Anyway, what I am looking for and am curious about is if there is a solution that I am overlooking that will work the same, or almost the same or better, but at a cheaper price. I read about people being happy about pi.dev and OpenCode. I tried OpenCode with Mimo V2 pro and it is pretty good. I previously used Qwen CLI before they stopped the free usage, and Gemini CLI. I also used Z.ai with OpenCode. I read about people using Opus for planning and then for non-important stuff moving the agent to use a further cheaper model. I am not into usage-based pricing unless it will be cheaper nonetheless (I doubt it though). Do you have some cool setups to share? I usually do Python for backend and TypeScript frontend. Host on Hetzner, use mostly Docker but also k3s if required. |
|
| ▲ | headcanon 2 days ago | parent | prev | next [-] |
| I've been trying to toe the line here myself, here's how I've been doing it. For context, I pay for a Max 5x subscription. My main goal is to maximize my subscription token usage while trying to comply with the rules, but it's not clear where the line is for automation, so I feel like I need to be clever. - regular development (most token use): all interactive claude mode, standard use case - automated background development: experimenting with claude routines (first-class feature, on subscription) - personal non-nanoclaw claude automations (claude -p): uses subscription tokens, but only called as needed (generally just fixing something if something in my homelab infra goes down; it's set up to not fire at an exact cron time) - other LLM-based automations: usually an openrouter API key, cheap models as needed - nanoclaw: all API-key based, but since it's expensive I keep usage mostly minimal and try to defer anything heavyweight to one of the other automation strategies (nanoclaw mainly just connects my homelab infra with telegram) |
|
| ▲ | darylteo 2 days ago | parent | prev | next [-] |
| Correction: OpenClaw says Anthropic says OpenClaw-style Claude CLI usage is okay again. |
| |
| ▲ | dang 2 days ago | parent | next [-] | | (That's implied by the sitename to the right of the title) | | | |
| ▲ | eterm 2 days ago | parent | prev [-] | | And then recommends using an API key, which as far as I know was never restricted; it was using the subscription that was prohibited/limited. I'm confused by the comments being full of people swearing off Claude, feels like real HN bubble stuff. |
|
|
| ▲ | est 2 days ago | parent | prev | next [-] |
| https://news.ycombinator.com/from?site=openclaw.ai hot damn |
| |
|
| ▲ | ryanshrott 2 days ago | parent | prev | next [-] |
| The underlying issue here is that OAuth credential reuse is different from API key scraping, and Anthropic hasn't really made that distinction clear. CLI-style usage that respects rate limits and uses official libraries is, architecturally, the same thing Max subscribers do in the web app, same auth, same endpoints. The problem isn't the use case. It's that companies get nervous about anything that looks like it could scale past what they priced in. It'd be a lot cleaner if they just published explicit rate limits for each subscription tier instead of these vague policy statements. |
| |
| ▲ | ghm2180 a day ago | parent [-] | | Yeah, rate limits are the way to go. I don't see how this is not really simple: a human can only type (or TTS) text in and read responses so fast, and Anthropic can use that as a baseline. They can create a client that backs off (rate limits) and waits, so that when a user says something like "spawn 10 processes with claude -p to do X", the client calculates the rate limit and places an event in a queue, and developers can build workflows around that: e.g. a queue whose timer event, firing when the rate limit expires, wakes up a daemon. There are a million ways to implement the queue <> daemon thingy; it's just software at that point. Since the subscription is hard-linked to an OAuth token, this should be easy to track too. What am I missing? |
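The back-off client described above is essentially a token bucket keyed to the OAuth identity. A minimal sketch (the rate and capacity numbers are invented for the example, not anything Anthropic has published):

```python
import time

class TokenBucket:
    """Allow bursts up to `capacity`, refilling at `rate` tokens per second."""
    def __init__(self, rate, capacity, now=time.monotonic):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.now = now        # injectable clock, for testing
        self.last = now()

    def try_acquire(self, cost=1.0):
        t = self.now()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False  # caller should queue the event and back off

# Fake clock so the behavior is deterministic:
clock = [0.0]
bucket = TokenBucket(rate=1.0, capacity=2.0, now=lambda: clock[0])
print(bucket.try_acquire())  # True  (burst)
print(bucket.try_acquire())  # True  (burst)
print(bucket.try_acquire())  # False (bucket empty)
clock[0] += 1.0
print(bucket.try_acquire())  # True  (refilled)
```

A request that fails `try_acquire` goes into the queue, and the daemon retries it once enough time has passed for the bucket to refill.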
|
|
| ▲ | solomonb 2 days ago | parent | prev | next [-] |
| Does this mean you can use openclaw with a Claude Pro account? I'm curious to try it, but no way I'm going to pay API rates. |
|
| ▲ | subscribed 2 days ago | parent | prev | next [-] |
| That's a very misleading title. Question to the sages: should that submission get flagged because of that? |
|
| ▲ | Frannky 2 days ago | parent | prev | next [-] |
| Why? Did they figure out cheaper compute? Or did they lose a lot of users, and now the compute is there unused? |
|
| ▲ | EFLKumo 2 days ago | parent | prev | next [-] |
| Whether or not to allow the Claude subscription to access other services: at this point, Anthropic seems schizophrenic, sometimes worried about insufficient computing power and sometimes worried about user loss, which is puzzling. |
| |
| ▲ | ralusek 2 days ago | parent | next [-] | | What's puzzling or schizophrenic about that? Those seem like two very natural factors that would be in tension with one another and have to be balanced. | | | |
| ▲ | baobabKoodaa 2 days ago | parent | prev [-] | | Almost seems like business leaders have to balance different aspirations and make tradeoffs. Unbelieveable. | | |
| ▲ | shepherdjerred 2 days ago | parent [-] | | Could they at least have a page somewhere letting us know what we’re allowed to do today? |
|
|
|
| ▲ | RoxiHaidi 2 days ago | parent | prev | next [-] |
| Same, I am from the 3x plan and canceled and switched to Codex 2 days ago... |
|
| ▲ | brandensilva 2 days ago | parent | prev | next [-] |
| That ship had already sailed. It's hard to trust Anthropic here, given the wringer they have dragged us through. Contrast that with what GitHub did, which was to pause new customers to ensure quality remained and things were stable. |
|
| ▲ | 2001zhaozhao 2 days ago | parent | prev | next [-] |
| I think they're now charging it as extra usage if you use a custom prompt. In the Claude site they added the option to buy extra usage at 30% off if you buy $1,000 or more at a time, so it's still somewhat cheaper to use OpenClaw with a claude account compared to an API key. (Incidentally the 30% off might mean that choosing a Pro plan + extra usage versus Max plan might make sense for more people) |
| |
| ▲ | NewsaHackO 2 days ago | parent [-] | | Yes, if it is "extra usage," then that just seems like an API with extra steps. | | |
| ▲ | andai 2 days ago | parent [-] | | Yeah I read it as a smart move. Good for PR, and it captures a bunch of value that was being lost. From what I understood, millions of users are making unauthorized usage of the subscription APIs. So they can just capture some of that money, some of them will stay and pay the extra usage. (Many of them aren't knowledgeable about API keys etc, they just used the auto setup in OpenClaw.) The actual rules now are pretty confusing though. De jure illegal, de facto tolerated -- they'll just auto-detect and bill you for it. The claude -p situation, though, confuses me. This one's technically legal but against the spirit of the law. (I think made illegal a few weeks ago since it was being used as a workaround for the OpenClaw ban, and seems to have been re-legalized now?) But they can only extra-bill you for it if they can somehow detect it as invoked by OpenClaw, etc., right? If it's your own harness, it slips thru the cracks? ._. |
|
|
|
| ▲ | F7F7F7 a day ago | parent | prev | next [-] |
| Just as they pull the rug on $20 users. This is the most transparently nontransparent company in tech existence. They always reveal their cards in the most clumsy ways possible. Their enterprise API numbers must be godlike for the way they are treating B2C customers. |
|
| ▲ | swyx 2 days ago | parent | prev | next [-] |
| a more authoritative source (aka a tweet) would be nice. |
|
| ▲ | francofx3 18 hours ago | parent | prev | next [-] |
| Google AI Pro just banned me for using it on pi.dev. "Failed to sign in. Message: This service has been disabled in this account for violation of Terms of Service. Please submit an appeal to continue using this product." |
|
| ▲ | djyde 2 days ago | parent | prev | next [-] |
| Off topic, but don't you all think using Claude inside openclaw is quite a waste of tokens? |
|
| ▲ | linsys 2 days ago | parent | prev | next [-] |
| I'm surprised by this actually but OpenClaw is trash anyway. |
|
| ▲ | spectaclepiece 2 days ago | parent | prev | next [-] |
| What models have you guys tried to use with OpenClaw that you've found suitable for the task? Codex personally rules for my dev style but not sure how well it works in the claw scenario. |
|
| ▲ | fudged71 2 days ago | parent | prev | next [-] |
| Is there a way to use Anthropic subscription with hermes-agent? |
|
| ▲ | OG_BME 2 days ago | parent | prev | next [-] |
| This is only useful when you are using the Claude CLI fairly regularly on the same machine as OpenClaw, right? Because the tokens need to be refreshed manually every so often? |
|
| ▲ | nullbyte 2 days ago | parent | prev | next [-] |
| Does that include OpenCode? That's what I care about most and it's the primary reason I've been sticking with OAI the past few months. |
|
| ▲ | johnsmith1840 2 days ago | parent | prev | next [-] |
| Sounds like they are scaring off any startup trying to build a product like this before anthropic can ship their own. |
|
| ▲ | arjunthazhath a day ago | parent | prev | next [-] |
| The Anthropic guys are playing games with branding, and now with the service itself. |
|
| ▲ | croes 2 days ago | parent | prev | next [-] |
| Correct title: OpenClaw says Anthropic said OpenClaw-style Claude CLI usage is allowed again |
|
| ▲ | jorisboris 2 days ago | parent | prev | next [-] |
| Swapped my OpenClaw to Claude again. I played around with Gemini and Chinese models in past month but it didn’t work for me. |
|
| ▲ | bilalbayram 2 days ago | parent | prev | next [-] |
| Anthropic is trying so hard to be Apple that they are making all the mistakes Apple made in its early days |
|
| ▲ | imron 2 days ago | parent | prev | next [-] |
| Can we get OpenCode support back as well? |
|
| ▲ | jedisct1 2 days ago | parent | prev | next [-] |
| And tomorrow, it won't be allowed any more and accounts will be closed without prior notice. Use something else. |
|
| ▲ | waynevdm 2 days ago | parent | prev | next [-] |
| Did they disable this to give them time to come out with their own agent? |
|
| ▲ | saltyoldman 2 days ago | parent | prev | next [-] |
| I guess it doesn't matter any more, everyone bought all the mac minis |
|
| ▲ | segmondy 2 days ago | parent | prev | next [-] |
| They see that the new KimiK2.6 will eat their lunch. They don't care about you, they just care about your money and will take away your options if they don't believe you have a solid alternative. |
|
| ▲ | yogigan 2 days ago | parent | prev | next [-] |
| Feels like the real issue isn’t policy but pricing models |
|
| ▲ | josephd79 2 days ago | parent | prev | next [-] |
| So if I use an OpenClaw-style CLI that looks like opencode or other agentic-style applications, would that be acceptable? |
|
| ▲ | kordlessagain 2 days ago | parent | prev | next [-] |
| This title is ridiculous and needs to be fixed. |
|
| ▲ | mlitwiniuk 2 days ago | parent | prev | next [-] |
| This is a perfect example of how quickly you can burn through trust that took a long time to earn.
I used to be - in my small circle of friends and peers - a genuine advocate for Anthropic and Claude. It was my sole AI assistant for over a year. But somewhere around February/March, something shifted. Declining quality, policy changes, inconsistent output. Nothing dramatic, just... a slow erosion. That erosion pushed me to try Codex. I signed up for their most expensive pro plan. Now I'm about to experiment with Kimi. I'm not saying they're better (well, sometimes they are). But here's the thing - what Anthropic did is they made me look. They made a loyal customer start shopping around. And I think that's the worst thing you can do. Having said that - as an LLM provider for my product, we're staying with Claude. I still trust in their ethics. Please don't prove me wrong. |
| |
| ▲ | layoric 2 days ago | parent | next [-] | | I'm trying out codex for the first time as well, because something is up with Claude for sure; 4.7 has been super frustrating. For other models, I highly recommend trying MiniMax 2.7. Using it with Hermes is actually pretty good, and their token subscription plans include a lot of usage for $10. | | |
| ▲ | mlitwiniuk 2 days ago | parent [-] | | Perfect, thanks. Codex app sucks, but I've been exploring opencode for that. Will try MiniMax! |
| |
| ▲ | kilroy123 2 days ago | parent | prev | next [-] | | Same here. I've been on the Claude Max 20x plan for a while. Now I'm really giving codex a try and looking at the cheaper models as well. | |
| ▲ | baq 2 days ago | parent | prev [-] | | Enshittification 101, codex is undergoing the same thing on a 3 month lag. | | |
|
|
| ▲ | gregman1 2 days ago | parent | prev | next [-] |
| Canceled anyway. |
|
| ▲ | GodelNumbering 2 days ago | parent | prev | next [-] |
| How about third party coding harnesses? |
| |
| ▲ | aqme28 2 days ago | parent [-] | | Or Claw-like harnesses that we make ourselves? It takes honestly like 15 minutes to roll your own, so I did it thinking "well, hopefully it's not considered third party" | | |
| ▲ | sitkack 2 days ago | parent [-] | | I do claw-like things all the time. Give CC an API document and it figures out how to take a snapshot of the data, pulls it down, and does an analysis. |
|
|
|
| ▲ | imhoguy 2 days ago | parent | prev | next [-] |
| Would that apply to OpenCode too? |
|
| ▲ | giancarlostoro 2 days ago | parent | prev | next [-] |
| Uh, what? For the love of God can I make my own harness or not? Or is this just saying you can use it only in API mode? I have had some ideas for a custom harness (like embedding some tools OOTB and replacing slow tooling) but these policies throw me off. Instead I use local models. Problem is API costs are insane. I have toyed with the idea of running a local model that works with Claude Sonnet or even Haiku, and I know this has been done by others. |
|
| ▲ | sergiopreira 2 days ago | parent | prev | next [-] |
| Anthropic keeps conflating two distinct strategies — be the best model for developers to build on, or be the company that ships Claude Code. Those two have opposite policy conclusions. Restricting third-party harnesses maximizes Claude Code revenue; allowing them maximizes model-layer lock-in through developer habit. The whiplash is the symptom of not picking. Pick for crying out loud! |
|
| ▲ | notShabu a day ago | parent | prev | next [-] |
| While I get that unclear communication is frustrating especially if there are sunk costs, the core issue seems to be that they can't afford to subsidize all these tokens for free. The agent model breaks the power-user/casual-user proportion that makes the existing saas-like pricing work. |
|
| ▲ | mentalgear 2 days ago | parent | prev | next [-] |
| Bad Decision. |
|
| ▲ | Rover222 2 days ago | parent | prev | next [-] |
| Probably somewhat worried about users shifting to the Grok API if they have to |
|
| ▲ | darrenc81 2 days ago | parent | prev | next [-] |
| Great so now we can all look forward to Claude progressively getting reduced limits again. How long till the $1000 ultra plan... or they just want us all paying API credits instead |
| |
|
| ▲ | basisword 2 days ago | parent | prev | next [-] |
| The problem is these tools are so important that I'm never going to risk Anthropic blocking my account after the last debacle. So I'll be using OpenAI with OpenClaw. Hard to win back trust. |
|
| ▲ | 2 days ago | parent | prev | next [-] |
| [deleted] |
|
| ▲ | jarym 2 days ago | parent | prev | next [-] |
| Pfft. Damage done; users know that Anthropic will pull the rug from under them again if given half a chance. So yeah, plan accordingly. |
| |
|
| ▲ | amazingamazing 2 days ago | parent | prev | next [-] |
| Guess they saw their growth rate shrink dramatically lol |
| |
| ▲ | garganzol 2 days ago | parent [-] | | More people flocked to Codex and found out that it's not worse, and sometimes superior. |
|
|
| ▲ | vibly 2 days ago | parent | prev | next [-] |
| Hmm. Is this real?? If so, it's actually amazing news lol |
|
| ▲ | Havoc 2 days ago | parent | prev | next [-] |
| Same PR strategy as the US administration lol |
|
| ▲ | _pdp_ 2 days ago | parent | prev | next [-] |
| Good luck on that opus plan. |
|
| ▲ | jbrooks84 2 days ago | parent | prev | next [-] |
| Lol, no thanks |
|
| ▲ | KaiShips a day ago | parent | prev | next [-] |
| [dead] |
|
| ▲ | 11pyo 2 days ago | parent | prev | next [-] |
| [dead] |
|
| ▲ | nexustoken 2 days ago | parent | prev | next [-] |
| [dead] |
|
| ▲ | maxbeech a day ago | parent | prev | next [-] |
| [dead] |
|
| ▲ | digdatechAGI a day ago | parent | prev | next [-] |
| [dead] |
|
| ▲ | throwaway613746 2 days ago | parent | prev | next [-] |
| [dead] |
|
| ▲ | openclawclub 2 days ago | parent | prev [-] |
| [flagged] |
| |