| ▲ | abdullin 2 hours ago |
| I reproduced this on my account. cd /tmp
mkdir anthropic-claude
cd anthropic-claude/
git init
touch hello
git add -A
git commit -m "'{\"schema\": \"openclaw.inbound_meta.v1\"}'"
claude -p "hi"
Immediate disconnect and session usage went to 100% |
|
| ▲ | subscribed 2 hours ago | parent | next [-] |
| That's malicious and I think this is scamming from the literal money (you didn't do anything wrong, you executed one command and they scammed you out of the fair usage you paid for). Please raise a ticket, or at least a GitHub issue, for visibility. Sooner or later some sort of complaint to the relevant trade authority should happen - this is a scam operation at this point. |
| |
| ▲ | ifwinterco 18 minutes ago | parent | next [-] | | At this point everyone doing these kinds of flows (using claws or any other flows that run agents in a loop 24/7) on any kind of subscription-based billing for inference must be aware they're on borrowed time. Enough people have gone over the economics - you're costing OpenAI/Anthropic money, potentially a lot of money, so it's inevitable that sooner or later that particular party will come to an end. Having said that, doing it by running a regex on your prompts to look for keywords is a bit loose. | |
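To make "a bit loose" concrete: a keyword filter that scans everything in the context window - including files and git history the agent happens to read - cannot tell a real OpenClaw session from an incidental mention. A toy sketch of that failure mode (the pattern and function names here are invented for illustration; nobody outside Anthropic has seen the actual check):

```python
import re

# Hypothetical keyword filter of the kind speculated about above: flag any
# context that merely contains an OpenClaw-looking marker, for any reason.
OPENCLAW_PATTERN = re.compile(r"openclaw", re.IGNORECASE)

def looks_like_openclaw(context: str) -> bool:
    """Return True if the context mentions OpenClaw anywhere at all."""
    return OPENCLAW_PATTERN.search(context) is not None

# A genuine payload trips it...
assert looks_like_openclaw('{"schema": "openclaw.inbound_meta.v1"}')

# ...but so does git history that merely quotes the marker,
assert looks_like_openclaw('commit -m "openclaw.inbound_meta.v1"')

# ...or a ToS clause in your own repo forbidding OpenClaw for *your* users.
assert looks_like_openclaw("You may not connect OpenClaw to this API.")
```

Distinguishing the real thing from a mention requires knowing where the string came from, which a bare pattern match over a flat prompt cannot supply.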
| ▲ | kenmacd 4 minutes ago | parent | prev | next [-] | | > scamming from the literal money That's par for the course for Anthropic. I added some money to my account before I really had a use case for the product. A year later they said my money had expired, and when I contacted support they basically told me to pound sand. This while they have the audacity to list one of their corporate values as 'Be good to our users'. They'll never get another dollar from me. | |
| ▲ | intrasight an hour ago | parent | prev | next [-] | | No. Hanlon's razor applies here. | | |
| ▲ | b00ty4breakfast an hour ago | parent | next [-] | | You lose little by assuming malicious intent when it comes to billion-dollar tech companies and your money. They can prove otherwise by remedying the situation. | | |
| ▲ | tedivm 22 minutes ago | parent [-] | | When it comes to understanding large organizations I think a simple principle should apply: The Purpose of a System is What it Does[1]. Whether malicious or not, the system does what it does. If people wanted it to do something else they would change the system. The reality is that when corporations make mistakes that benefit them those mistakes rarely get fixed without some sort of public outcry, turning the "mistake" into a "feature". 1. https://en.wikipedia.org/wiki/The_purpose_of_a_system_is_wha... |
| |
| ▲ | bryanrasmussen 7 minutes ago | parent | prev | next [-] | | OK, how is this adequately explained by stupidity? If it is adequately explained by stupidity, then you should be able to get it to display the same behavior without mentioning OpenClaw. Do you have any theory as to what stupid thing they have done to make this happen, non-maliciously? Because Hanlon's razor doesn't work just by saying "Hanlon's razor" - you have to actually explain how the stupidity happened. | |
| ▲ | pfortuny 21 minutes ago | parent | prev | next [-] | | Not to corporations, no. You do not need to be charitable to a corporation. | |
| ▲ | conartist6 22 minutes ago | parent | prev | next [-] | | What you do shows what you value. This clearly wasn't a mistake on the part of Anthropic. Time has shown that. They made the call based on what they believe in | |
| ▲ | grayhatter 15 minutes ago | parent | prev | next [-] | | Gross negligence is malicious. | |
| ▲ | 40 minutes ago | parent | prev | next [-] | | [deleted] | |
| ▲ | michaelmrose 30 minutes ago | parent | prev [-] | | It does not. That would be fairly magical. The most favorable interpretation that makes sense is that it's supposed to disconnect, but taking your money on top of that is a defect. |
| |
| ▲ | kitsune1 an hour ago | parent | prev | next [-] | | [dead] | |
| ▲ | wotsdat 43 minutes ago | parent | prev | next [-] | | [dead] | |
| ▲ | otterley an hour ago | parent | prev [-] | | There are many possible explanations for this outcome to have occurred other than malice. If you're an engineer by trade, consider how many bugs you've been responsible for over the course of your career that you didn't intend. Probably a lot. How about we turn down the heat, everyone? | | |
| ▲ | rv64imafdc an hour ago | parent | next [-] | | There's been a sustained pattern of incidents. If Anthropic were truly serious about not wanting to take people's money, then they would have put in place whatever review processes were necessary to stop this from happening. So regardless of whether or not they specifically intend to cause harm, they're willingly letting it happen, which is just about as bad. Yes, it's reasonable to turn down the heat. But it's also reasonable for people to be upset when their money is taken from them, and when the company that does so is effectively beyond prosecution for doing so. | |
| ▲ | loloquwowndueo an hour ago | parent | prev | next [-] | | Even assuming the best of faith, this is at the very least a shoddily vibe-coded “detect and low-key block attempts to use Claude for OpenClaw” - it decided to look for specific strings wrapped in JSON without realizing this doesn't always mean it's an actual payload for OpenClaw itself. And the human driving it was too dumb to review/catch this bad implementation. So maybe not malice, but certainly a level of ineptitude I don't expect from a crucial vendor of a tool that's become essential for many developers. (Not that I care much - I do just fine when Claude is down or refuses to help me, which has happened.) | |
| ▲ | teiferer 35 minutes ago | parent [-] | | > was too dumb to review Yolo ship it! Move fast and break things. Reviewing just slows everybody down. Nobody can keep up with those coding agents output any longer. /s |
| |
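The "specific strings wrapped in JSON" theory is easy to demonstrate: a detector that searches the prompt for an OpenClaw meta-schema object cannot distinguish a live payload from a commit message that merely quotes one - which is exactly the repro at the top of the thread. A minimal sketch (the regex and function below are my guess at such an approach, not anything Anthropic has confirmed):

```python
import re

# Hypothetical detector: find a JSON object carrying an "openclaw.*" schema
# marker anywhere in the text handed to the model.
MARKER = re.compile(r'\{[^{}]*"schema"\s*:\s*"openclaw\.[^"]*"[^{}]*\}')

def flags_as_openclaw(text: str) -> bool:
    return MARKER.search(text) is not None

# An actual inbound payload matches:
assert flags_as_openclaw('{"schema": "openclaw.inbound_meta.v1", "body": "hi"}')

# But so does git history whose commit *message* quotes the same JSON -
# the exact situation in the reproduction at the top of this thread:
log = 'commit abc123\n\n    \'{"schema": "openclaw.inbound_meta.v1"}\''
assert flags_as_openclaw(log)
```

Telling the two apart needs provenance - did this arrive as a payload, or is it inert text inside a file or commit? - which a pattern match over a flat prompt cannot see.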
| ▲ | grayhatter 7 minutes ago | parent | prev | next [-] | | > consider how many bugs you've been responsible for over the course of your career that you didn't intend. Through some amount of carelessness that ended up costing people money? Zero. Maybe one, if you count the automated monthly charging system that did overcharge a handful of clients (extra, erroneous charges for the same month) one too many times. I noticed before anyone else did, and all of those 1am charges were reversed before 4am. So I don't think that one counts, because it was a boring bug that would have been very bad if I hadn't been paying attention. Incompetence to the point of negligence can reasonably be considered malicious. If you're an engineer by trade, you have an ethical and professional responsibility to make sure things like this can't happen - and then, when bugs do slip through, to fix them and remediate the damage. | |
| ▲ | rohansood15 an hour ago | parent | prev | next [-] | | I am an engineer by trade. If I pushed an update which wrongly busted my customers' usage limits at a trillion-dollar company, I would expect to get fired. Alongside my EM. | |
| ▲ | jonahx 36 minutes ago | parent | next [-] | | Regardless of your expectations (I'm not criticizing them), that is just not how it works at most American companies. Especially not for your manager. | | | |
| ▲ | michaelmrose 27 minutes ago | parent | prev | next [-] | | I would expect someone to be critiqued to avoid it recurring and the person's money to be refunded. A company which fires so trivially will quickly flush institutional knowledge and team cohesion, along with eating substantial recruitment costs. | | |
| ▲ | colechristensen 21 minutes ago | parent | prev [-] | | This is not how any engineering workplace anywhere operates. | | |
| |
| ▲ | throwaw12 an hour ago | parent | prev | next [-] | | > How about we turn down the heat, everyone? How about Anthropic turn down the heat and refunds money to everyone for every bug it created with its LLM? | | | |
| ▲ | ceejayoz an hour ago | parent | prev | next [-] | | > How about we turn down the heat, everyone? The heat is coming, in part, from the lack of a proper support channel. | | |
| ▲ | otterley 16 minutes ago | parent [-] | | I agree that their support is abysmal, and that is intentional. It's unfortunate that the greater market doesn't seem to care that much right now. |
| |
| ▲ | bad_haircut72 an hour ago | parent | prev | next [-] | | Yeah they probably just typed in "Hey Claude, figure out a way to get our inference spend under control - no mistakes!" and shipped it | | |
| ▲ | gjsman-1000 an hour ago | parent [-] | | Also, they ain't wrong. In what other context does OpenClaw get mentioned? "You may not use our service if you mention OpenClaw" is a harsh line, but hardly illegal or forbidden, any more than any other service restriction (e.g. no use allowed for high-stakes financial modeling). Don't like it, cancel your plan. | |
| ▲ | grayhatter 3 minutes ago | parent | next [-] | | > but hardly illegal or forbidden any more than any other service restriction Intentionally (or negligently) anti-competitive behavior is illegal in the US. > Don't like it, cancel your plan. Don't like being abused by a company? Just pretend it's not happening! And anyone else exactly as smart as you? They deserve to be cheated out of their money too! | |
| ▲ | rv64imafdc an hour ago | parent | prev | next [-] | | > is a harsh line But that's the thing -- there is no line! Where is this specified? How can we know what service restrictions there are? For all I know, my plan could be exhausted at any point during the workday just because I happened to touch on some keyword Anthropic has decided to ban. > Don't like it, cancel your plan. Ah, but I thought these models were supposed to have been trained for the sake of humanity? That the arbitrary enclosure of the collective intelligence was for our own good? These concepts are not compatible. | | |
| ▲ | vel0city an hour ago | parent | next [-] | | > I thought these models were supposed to have been trained for the sake of humanity? Tbh blocking OpenClaw might just be for the betterment of humanity. It's yet to be proven either way. | |
| ▲ | gjsman-1000 an hour ago | parent | prev [-] | | When you signed up, you agreed you understood the line - which is whatever Anthropic decides the line is. Legally, the line hasn't changed at all, nor has your moral position relative to Anthropic. Don't like it, cancel, but it was always the deal. This is, by the way, the same legal principle that the website you are posting on, right now, uses. Some uses are prohibited. Not every line need be explicit. You aren't allowed to smack talk Y Combinator or the moderators without possibly being banned for life, and you certainly do not have a legal case if they do. | | |
| ▲ | echoangle 10 minutes ago | parent | next [-] | | If you’re paying for it, they can’t just arbitrarily deny you service for made up reasons. I would cancel, but then I would also charge back my payment I’m not getting my promised service for. | | | |
| ▲ | StilesCrisis 39 minutes ago | parent | prev [-] | | Do you think businesses are allowed to just take your money, laugh, and refuse service for no reason? People spend large sums of money for this tool. They can't just delete your balance because they feel like it. | | |
| ▲ | bachmeier 17 minutes ago | parent [-] | | > Do you think businesses are allowed to just take your money, laugh, and refuse service for no reason? > People spend large sums of money for this tool. They can't just delete your balance because they feel like it. Unfortunately, in the US, they can. I'm not a lawyer working in this area, but my understanding is that companies are in general free to stop doing business with any customer at any time (for any reason other than things like the customer's race). And in this type of transaction, there is no obligation to give a refund when they cut off the business relationship. This is different from a business-to-business contract or other types of contracts. With this type of sale, you're generally out of luck if the business cuts you off. That's why Amazon can delete the music library they sold you and give you no compensation. | |
| ▲ | echoangle 8 minutes ago | parent [-] | | They can not prolong the contract but obviously they still have to provide the service you already paid for. Imagine paying for 1 year of Netflix and one week later Netflix decides to cut you off. Does that make sense? |
|
|
|
| |
| ▲ | Dylan16807 6 minutes ago | parent | prev | next [-] | | There's a lot of people making tools for coding with LLMs and those have a high chance of mentioning OpenClaw somewhere. | |
| ▲ | macNchz an hour ago | parent | prev | next [-] | | There are plenty of ways you could wind up with a git commit containing "OpenClaw" despite zero interaction with OpenClaw itself...adding a blog post to a static site repo, or even a clause in your own app's ToS disallowing use of OpenClaw with your API. | | | |
| ▲ | skywhopper 4 minutes ago | parent | prev [-] | | Where is this restriction documented? |
|
| |
| ▲ | nickthegreek an hour ago | parent | prev | next [-] | | And the stealing of $200 here? More non malice? https://github.com/anthropics/claude-code/issues/53262#issue... | | | |
| ▲ | Jcampuzano2 an hour ago | parent | prev | next [-] | | This would have been easy to say if it was the first time it or something similar happened. But there is a clear pattern emerging. There's no reason to turn down the heat when a company of this size and influence is allowed this level of absurdity time and time again. | |
| ▲ | skywhopper 7 minutes ago | parent | prev | next [-] | | Nah - however this was implemented, this was a clear and obviously probable side effect. If they want to block access at the mention of OpenClaw, that's silly but mostly harmless, but why charge extra for an ambiguous case? At best that's incredibly lazy, which for a company with as much money, influence, and power as Anthropic is equivalent to malice. | |
| ▲ | verdverm 9 minutes ago | parent | prev | next [-] | | This is not the first, nor likely last, of behavior like this. My personal story is that I bought $50 of credit into their system, didn't use it all that much, and then after a year had gone by they kept the leftovers. I consider that a kind of theft. | |
| ▲ | NetOpWibby an hour ago | parent | prev | next [-] | | Nuance? Ignorance vs malice? You think too highly of folks. | |
| ▲ | teiferer 37 minutes ago | parent | prev | next [-] | | Well, this regex nonsense was likely vibe-coded. If it escaped quality checks, that's a testament to how dangerous the things coming out of Anthropic are - just not in the sci-fi sense their CEO tries to make everybody believe. | |
| ▲ | an hour ago | parent | prev | next [-] | | [deleted] | |
| ▲ | surgical_fire an hour ago | parent | prev [-] | | How about no? Why should we coddle a corporations when they screw over customers? It matters very little if they did this out of incompetence or malice. |
|
|
|
| ▲ | petercooper an hour ago | parent | prev | next [-] |
| I wonder if anti-AI projects might start smuggling such identifiers into docs or commits as a way to sabotage people using Claude Code. Your project isn't going to get many AI PRs if just cloning it wipes out their quota. |
| |
|
| ▲ | isoprophlex an hour ago | parent | prev | next [-] |
| Think they turned it off, or it's not always active. I can't reproduce it myself. |
| |
|
| ▲ | margalabargala 36 minutes ago | parent | prev | next [-] |
| This partially reproduced for me. I did not see my session use go to 100%. I did however get: > API Error: 400 {"type":"error","error":{"type":"invalid_request_error","message":"You're out of extra usage. Add more at claude.ai/settings/usage and keep going."},"request_id":"redacted"} |
|
| ▲ | 2 hours ago | parent | prev | next [-] |
| [deleted] |
|
| ▲ | rich_sasha 2 hours ago | parent | prev | next [-] |
| That's rather shitty. It's one thing to disallow bypassing preferential pricing models; it's a completely different thing to castrate your model against certain uses. You can see where this goes in the future. Wanna vibe code a throwaway script? $0.20. Ah, it's for a legal document search? $10k then. Oh, and we'll charge 20% of your app sales too - we can see how those are doing in real time, mind you! |
| |
| ▲ | throwaway277432 an hour ago | parent | next [-] | | Unironically, yes. I predict that costs will grow to 80% of what it would cost a human, across the board, for everything AI can do. "It's still cheaper than a human," they'll say. Loudly, here on HN, too. Of course this will happen slowly, very slowly. Let's meet again in 10-20 years. | |
| ▲ | revolvingthrow an hour ago | parent | next [-] | | If OpenAI / Anthropic / Google were the only game in town then yeah, we'd already be paying 5x as much as we do. But local models are so close to SOTA that it just isn't going to happen. If I'm a lawyer being billed $500k/yr on $600k profit, I'd rather buy a chonky server, run a model that's 90% as good, and make my money back in 2 years - then I'm paying $5k in electricity on $600k profit. Nobody will successfully lobby for banning local models either; it just isn't going to happen when the rest of the world will happily avoid paying 80% of their profits to some US bigco for the privilege of existing. | |
| ▲ | KronisLV an hour ago | parent | prev | next [-] | | > "It's still cheaper than a human" they'll say. The question is how much friction there will be for people to switch over to Gemini, GPT or maybe even DeepSeek or Mistral or whatever. Even if price hikes are inevitable across the board, the moat any single org has is somewhat limited, so prices definitely will be a factor they'll compete on with one another at least a bit. | | |
| ▲ | RussianCow an hour ago | parent [-] | | > the moat any single org has is somewhat limited I disagree. The models are going to become commodities (we're already almost there), but the tooling and integrations will be the moat. Reproducing everything Anthropic has already built with Claude Code, Cowork, and all their connectors would be nontrivial, and they're just getting started. Anyone can implement an AI chatbot. But few will be able to provide AI that's deeply integrated into our daily lives. | | |
| ▲ | KronisLV 9 minutes ago | parent [-] | | > Reproducing everything Anthropic has already built with Claude Code, Cowork, and all their connectors would be nontrivial, and they're just getting started. They're one org with presumably some specific direction. As the actual models get better, expect a large part of the dev community iterating on tools way more easily, sometimes ones that Anthropic doesn't quite have an equivalent to - for example, just recently Cline released their Kanban solution to dish out tasks to agents (https://cline.bot/kanban), OpenCode has been around for a while for the agentic stuff (https://opencode.ai/) and now has a desktop and web version as well, alongside dozens of others. Cline and KiloCode also have decent browser automation. I will admit that everyone working on everything at the same time definitely means limitless reinvention of the wheel and some genuinely good initiatives dying off along the way (I personally liked RooCode more than both the Cline and KiloCode for Visual Studio Code, sad to see them go), but I doubt we're gonna see a lack of software. Maybe a lack of good software, though; not like Anthropic or any org has any moat there either, since they're under the additional pressure of having to do a shitload of PR and release new models and keep up appearances, compared to your average dev just pushing to GitHub (unless they want corporate money, in which case they do need some polish). |
|
| |
| ▲ | vidarh 41 minutes ago | parent | prev | next [-] | | Kimi and GLM 5.1 are already capable of handling a good chunk of my tasks. They're about to lose the leverage that would allow them to drastically increase prices - enough models are 6-12 months away from being good enough for a large proportion of their customers' uses. | |
| ▲ | pingou an hour ago | parent | prev | next [-] | | This is assuming there will be no competition. But why wouldn't there be? Especially since you can use open source models, which are not too far from frontier models (from now). | |
| ▲ | mystraline an hour ago | parent | prev [-] | | It's not 20 years. It's now. Nvidia has already said that tokens cost more than humans. https://finance.yahoo.com/sectors/technology/articles/cost-c... |
| |
| ▲ | andai an hour ago | parent | prev | next [-] | | So like taxes except they actually help you survive? | |
| ▲ | 2ndorderthought an hour ago | parent | prev | next [-] | | I'm not a lawyer but is this legal? It's extremely anticompetitive. | | |
| ▲ | bdangubic an hour ago | parent [-] | | what is illegal about it?! their product, they can do whatever they want and you can choose to be a customer or not, no? | | |
| ▲ | 2ndorderthought an hour ago | parent [-] | | They are technically billing people for services not rendered without any disclaimer? | | |
| ▲ | duped an hour ago | parent [-] | | Price discrimination for services is mostly legal | | |
| ▲ | in_cahoots 40 minutes ago | parent [-] | | Imagine if it were Comcast instead of Claude. Comcast gives you 750GB of data a month. Now they decide that visiting HN 'counts' as 750GB and either shut you off or bill you extra. Is that price discrimination or changing the terms after the fact? | | |
| ▲ | duped 8 minutes ago | parent [-] | | Depends. Comcast is able to charge you and a business for the same service at different rates. They have also tried to do exactly what you're talking about, where they bill differently based on the data being accessed (remember net neutrality?). But that's a bad example, price discrimination for commodities is generally not legal, while discrimination for services is. Data is arguably a commodity (ianal, I'm not up to date on the law of this). "Tokens" are not. In fact the law makes carve outs specifically for businesses that sell services to discriminate on price based exactly on how the service is used and by who. And they do it all the time. Whether it's fair or not, up to you to decide as a consumer. If you don't like it don't pay for it. |
|
|
|
|
| |
| ▲ | dangus 2 hours ago | parent | prev [-] | | This is absolutely how it's going to work. AI loses way too much money not to be enshittified. It's a far less transformational technology when put in the context of its real price tag. | |
| ▲ | rapind an hour ago | parent | next [-] | | No chance, unless the open-weight models out of China stop coming. The gap right now is practically nonexistent. | |
| ▲ | delusional an hour ago | parent [-] | | When the consolidation phase starts, you bet your ass open weight models are going to stop. | | |
| ▲ | mitchitized an hour ago | parent [-] | | I don't think consolidation will ever happen; the AI space is already dominated by a few whales. Most of the open-weight models seem to come from outside the USA (shocker) - going to be interesting to see how THAT shakes out. |
|
| |
| ▲ | bugglebeetle an hour ago | parent | prev | next [-] | | Deepseek has demonstrated that there is no reason for it to actually lose money. The awful business practices and monopoly tactics of the frontier model labs in the US are the problem. | |
| ▲ | delusional an hour ago | parent | prev [-] | | I mean obviously. Why would the companies that control this technology NOT charge the absolute maximum amount their customers are willing to pay? This doesn't even have anything to do with if it loses money or not. Obviously they are going to charge as much as possible. |
|
|
|
| ▲ | mystraline an hour ago | parent | prev [-] |
| It's not Claude Code. It's "Fraud Code". All of this is just criminal and fraudulent behavior, done to a whole bunch of people who haven't learned their lesson and keep sending Anthropic more money for abuse at scale. |
| |
| ▲ | gjsman-1000 an hour ago | parent | next [-] | | There is literally nothing close to illegal about this behavior. You read the terms of service, right? They provide a long list of explicit and implicit disclaimers. | |
| ▲ | nickthegreek 28 minutes ago | parent | next [-] | | What action did the user take that was against the TOS? | | |
| ▲ | margalabargala 24 minutes ago | parent [-] | | You misunderstand. The user didn't take an action that was "against the TOS". The TOS simply allows Anthropic to decline to fulfill a request at any time for any reason. | | |
| ▲ | schubidubiduba 11 minutes ago | parent [-] | | TOS are not laws. They often conflict with actual laws, and are then void. So you can't just say "It's in the TOS", you do have to look at actual laws and whether they may be violated (Because it is anticompetitive or whatever else) |
|
| |
| ▲ | cyanydeez an hour ago | parent | prev | next [-] | | So, in America, just because it's written in a contract does not mean it's enforceable in any way. I can make you sign an infinitely generating contract; that doesn't mean it's enforceable. | |
| ▲ | vel0city an hour ago | parent | next [-] | | > just because it's written in a contract does not mean it's enforceable in anyway And we continue slipping into lawlessness and a low trust society... | |
| ▲ | gjsman-1000 an hour ago | parent | prev [-] | | > So, in America, just because it's written in a contract does not mean it's enforceable in anyway. But the presumption, as any court will show, is that it is fully blooming enforceable. The burden of proof is on showing it isn't. A lawyer would laugh in your face over this particular instance; it is absolutely 100% stone-cold enforceable, common, and expected. How do you expect Facebook or HN to moderate if certain uses aren't prohibited? The same principle applies. HN bans certain phrases - lots of them. | |
| |
| ▲ | Tadpole9181 37 minutes ago | parent | prev [-] | | If I have a terms of service for my SaaS where I've snuck in a vague term that I can "charge additional usage fees at my discretion", it doesn't mean I get to actually charge you $100,000 because I found out your favorite color is blue. There's absolutely an expectation of reasonability and good faith. Nobody signing up for Claude would be reasonably assuming that they are allowed to arbitrarily decide what magic words suddenly bypass the subscription cost model that was actually purchased into an overcharge model that is significantly more expensive, whose verbiage clearly indicates the intent of the feature being enabled is to allow additional use after the quota has been consumed, not randomly at the behest of Anthropic. |
| |
| ▲ | insane_dreamer an hour ago | parent | prev [-] | | It's in the TOS, so no, not fraud. You might not like it that Anthropic doesn't want you running OpenClaw (effectively owned by a competitor) on CC, but that doesn't make it fraudulent or criminal. | | |
| ▲ | nickthegreek 27 minutes ago | parent | next [-] | | The user did not do anything against the TOS. This isnt about running OpenClaw, its about having the words OpenClaw present in a file. | |
| ▲ | rohansood15 an hour ago | parent | prev | next [-] | | TOS is not an impenetrable immunity shield. | |
| ▲ | jknoepfler an hour ago | parent | prev [-] | | Isn't this precisely the pattern of behavior that gets you sued for anti-competitive practices? | | |
| ▲ | theshrike79 37 minutes ago | parent | next [-] | | This is exactly the same as what Google does when it deliberately fiddles with the page design to break alternative YouTube clients. Nobody is claiming that's anticompetitive. | |
| ▲ | gjsman-1000 42 minutes ago | parent | prev [-] | | What? Seriously, not at all. Anti-competitive practice is when you go out of your way to use legal agreements or practices in an illegal way (i.e. from the starting point of a monopoly) to deliberately restrict the ability to use competitors. OpenClaw is not a competitor to Claude. Anti-competitive practices would only occur here if Anthropic used some technique to prevent people from using Claude alternatives (e.g. if installing Claude Code forcibly disabled all other AI agents on your system). |
|
|
|