| ▲ | applfanboysbgon 2 hours ago |
| There is nothing wrong with flat-rate plans. I work at an LLM-serving startup and am aware of at least three competitors that (a) provide flat-rate subs, (b) are extremely profitable, and (c) are bootstrapped, i.e. not beholden to investors (there are also many other competitors, but I can't ascertain their profitability or investment status). You simply need to set the flat-rate sub at a price that's profitable when averaged out over all of your users, both light and heavy, and prevent fully automated usage by the power users. That's it. This is immensely more user-friendly, and I doubt you'd get any traction at all if you didn't do this. Even if you pay more for the sub, having unlimited (non-automated) usage removes a mental barrier to using the product. If you have to pay for every request you make, it introduces a hesitation to do anything - it makes the user hesitant to experiment, hesitant to prompt for anything of slightly lesser significance, anxious about the exact token consumption of every prompt, and so on. It's not enjoyable to use a product when you're being penny-pinched for every prompt. Anthropic's problem, of course, is that they are not bootstrapped. They don't have a business model that can compete with startups running DeepSeek or GLM on their own hardware. Non-frontier startups got to skip the whole "tens of billions of dollars in debt" step of creating a frontier model from scratch, and still get to run a model that is perhaps 80-85% as good as Anthropic's, which is good enough for millions of customers. So Anthropic is desperate, backed into a corner, and doing anything and everything they can to try to right their sinking ship, no matter how scummy. |
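The averaging the parent describes is easy to sketch as back-of-envelope arithmetic. Every figure below is hypothetical; the point is only that a flat price can clear the blended per-user cost once heavy automated use is capped:

```python
# Hypothetical break-even check for a flat-rate sub over a mixed user base.
# All segment costs and shares are made-up illustrative numbers.

# monthly inference cost per user, by segment, and each segment's share of subscribers
segments = {
    "light":  {"cost": 2.00,  "share": 0.70},
    "medium": {"cost": 12.00, "share": 0.25},
    "heavy":  {"cost": 60.00, "share": 0.05},  # kept bounded by blocking full automation
}

# blended cost per subscriber = weighted average over segments
avg_cost = sum(s["cost"] * s["share"] for s in segments.values())

price = 20.00  # the flat-rate sub
margin = price - avg_cost

print(f"average cost per user: ${avg_cost:.2f}")   # $7.40
print(f"margin at ${price:.0f}/mo: ${margin:.2f}")  # $12.60
```

The heavy segment loses money individually (cost $60 against a $20 sub), which is exactly why the "prevent fully automated usage" condition matters: it keeps that segment small enough for the average to stay profitable.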
|
| ▲ | fwipsy 2 hours ago | parent | next [-] |
| Anthropic isn't backed into a corner. They have plenty of enterprise subscriptions. Individual user experience (especially billing) is suffering because it's not a priority in comparison. If they were as desperate as you described, they would try selling access to mythos. |
| |
| ▲ | applfanboysbgon 2 hours ago | parent [-] | | The fact that they are adding code specifically to charge individual consumers more reeks of desperation. This isn't "individual users are suffering because they're lower priority and neglected", this is "individual users are being actively squeezed because Anthropic is desperate for every penny it can get". | | |
| ▲ | fwipsy 2 hours ago | parent [-] | | This is such a stupid way to charge customers more. How many Claude Code users use OpenClaw? Cheating customers is like burning down your house to keep warm. Anthropic aren't that stupid. I guarantee this was some half-baked, vibe-coded anti-abuse system. |
|
|
|
| ▲ | vintermann an hour ago | parent | prev | next [-] |
| > prevent fully automated usage by the power users

But being a power user and fully automating things is the whole appeal. |
|
| ▲ | pkulak 2 hours ago | parent | prev | next [-] |
| I also assume that forcing usage to spread out, via those 5-hour windows, has cost advantages. |
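If the windows work the way the parent suggests, the mechanism could be as simple as a fixed-window quota that resets every five hours. This is a guess at the design, not Anthropic's actual metering, and all names and numbers are hypothetical:

```python
class FixedWindowQuota:
    """Toy fixed-window quota: `limit` units per `window` seconds.

    Purely illustrative. A user who burns the budget early in a window
    must wait for the reset, which forces bursts apart and smooths
    aggregate load on the serving fleet.
    """

    def __init__(self, limit: int, window: float = 5 * 3600, start: float = 0.0):
        self.limit = limit
        self.window = window
        self.used = 0
        self.window_start = start

    def try_consume(self, units: int, now: float) -> bool:
        if now - self.window_start >= self.window:
            # the 5-hour window has rolled over: budget resets
            self.window_start = now
            self.used = 0
        if self.used + units > self.limit:
            return False  # over quota until the next window
        self.used += units
        return True


q = FixedWindowQuota(limit=100)
a = q.try_consume(80, now=0)         # True: plenty of budget
b = q.try_consume(30, now=3600)      # False: still inside the first window
c = q.try_consume(30, now=5 * 3600)  # True: new window, budget reset
```

The cost advantage comes from the denial in the middle case: demand that would otherwise pile into one peak gets deferred to later windows.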
|
| ▲ | Oras 2 hours ago | parent | prev | next [-] |
| LLM serving startup => bootstrapped => extremely profitable

Mind sharing a link? |
| |
| ▲ | applfanboysbgon 2 hours ago | parent [-] | | I do mind, since I enjoy speaking freely without worrying about my opinions being linked to my employer. I assure you companies like this exist. Profiting off of inference is not the hard part; it's frontier training that is prohibitively expensive. You're free to disregard my commentary if you want, of course. | | |
| ▲ | beepbooptheory 2 hours ago | parent | next [-] | | Why not just name one of those three competitors? | |
| ▲ | simoncion 2 hours ago | parent | prev [-] | | > Profiting off of inference is not the hard part, it's frontier training that is prohibitively expensive. And given that Anthropic does both, it must make up its training costs by selling inference. jp57 was pretty clearly talking about Anthropic's flat-rate plans, rather than the flat-rate plans of companies that get to skip the most expensive part of the process. | | |
| ▲ | applfanboysbgon 2 hours ago | parent [-] | | I understand that very well, yes. The point I'm making is that I don't think Anthropic or OpenAI would ever have gotten significant traction if they didn't have flat-rate plans, because flat-rate plans themselves are not inherently predatory or part of the enshittification slope - they're actually extremely UX-friendly. Perhaps in another timeline, if their product were actually valuable enough to pay this price for, they could have simply offered a $50 plan as the standard tier, with enough margin to cover training costs as well. But as I see it, DeepSeek is an existential threat to them, and they are now stuck between a rock and a hard place: their product is devalued by DeepSeek's existence, and if the frontier labs were to gate access behind $50 plans they would get their lunch eaten even more quickly. It turns out there are downsides to burning inconceivably large stacks of other people's money. | | |
| ▲ | simoncion an hour ago | parent [-] | | > The point I'm making is that I don't think Anthropic or OpenAI would have ever gotten significant traction if they didn't have flat-rate plans... That seems likely. If people had to pay their share of the actual all-in cost of the service (rather than having it be subsidized by investors with extremely deep pockets and a small handful of corporate customers), very, very few regular people would use it. The point that 'jp57' pretty explicitly made [0] is that flat-rate plans that don't cover the all-in cost of providing the plans tend to result in those plans getting worse and worse and worse, as economic realities assert themselves. If the flat-rate plans that you are aware of actually cover the cost of providing the service, then you're discussing an entirely different situation that's entirely inapplicable to the discussion about Anthropic's pricing and degrading level of service. [0] ...which is one that's understood by people who have been in pretty much any industry for more than a few years... | | |
| ▲ | applfanboysbgon an hour ago | parent [-] | | The crux of my argument is that there is a timeline where people would've paid the all-in cost of the service, with margin, as a flat-rate sub. The $20 rate was not sustainable when factoring in training costs, but if not for DeepSeek they could have simply raised prices rather than *gestures broadly* whatever the fuck is going on at Anthropic now, with a new PR fumble every three days. If the Chinese models didn't exist, people would've groaned but would likely still pay $40 or $50 for an LLM subscription. You redirected my quoted statement to assert a position I did not take. When I talk about flat-rate subs being good UX, I am not talking about a subsidized rate. My position is that people will pay more for a flat-rate sub than they are willing to pay through per-token billing. That is, a consumer who would only pay an average of $10/mo if they used the API will voluntarily pay $20/mo for a sub, because even though it's a worse value, the latter is a tremendously more friendly user experience. When I say that flat-rate subs are necessary for traction, I mean that solely from a user experience perspective, not "subsidized usage is necessary for traction". | | |
| ▲ | skydhash 7 minutes ago | parent [-] | | There’s also the “prepaid” alternative, especially if you’re skittish about budgets. You top up your account with $10, and when you run low (maybe by setting an alert at around $8 spent), you can add an extra $5 to make it to the end without interruption. |
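The prepaid flow described above could look roughly like this. The class, method names, and dollar thresholds are all hypothetical, just sketching the top-up / spend / alert cycle:

```python
class PrepaidBalance:
    """Toy prepaid account: top up, spend per request, alert when low.

    A hypothetical sketch of the flow in the parent comment: a $10
    top-up with an alert around $8 spent, i.e. when about $2 remains.
    """

    def __init__(self, low_water: float = 2.0):
        self.balance = 0.0
        self.low_water = low_water  # remaining balance that triggers the alert

    def top_up(self, amount: float) -> None:
        self.balance += amount

    def charge(self, amount: float) -> bool:
        if amount > self.balance:
            return False  # service pauses instead of overdrafting
        self.balance -= amount
        return True

    def low(self) -> bool:
        return self.balance <= self.low_water


acct = PrepaidBalance(low_water=2.0)
acct.top_up(10.0)
ok = acct.charge(8.5)   # spend past the alert threshold
alerted = acct.low()    # True: time to add the extra $5
acct.top_up(5.0)
```

This gives the budget certainty of metered billing without the surprise of an uncapped invoice: the worst case is an interruption, never an overdraft.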
|
|
|
|
|
|
|
| ▲ | bdangubic an hour ago | parent | prev [-] |
| > prevent fully automated usage by the power users

this is a non-starter |
| |