dom96 3 days ago

> if I give you $15B, you will probably make a lot more than $15B with it

"probably" is the key word here, this feels like a ponzi scheme to me. What happens when the next model isn't a big enough jump over the last one to repay the investment?

It seems like this already happened with GPT-5. They've hit a wall, so how can they be confident enough to invest ever more money into this?

bcrosby95 3 days ago | parent [-]

I think you're really bending over backwards to make this company seem non-viable.

If model training has truly turned out to be profitable at the end of each cycle, then this company is going to make money hand over fist, and investing money to outcompete the competition is the right thing to do.

Most mega-corps started out wildly unprofitable because they were investing in the core business... until they weren't. It's almost as if people forget the days when Facebook was seen as perpetually unprofitable. This is how basically all the huge tech companies you know today started.

serf 2 days ago | parent | next [-]

> I think you're really bending over backwards to make this company seem non-viable.

Having experienced Anthropic as a customer, I have a hard time believing that their inevitable failure (something I'd bet on) will be model- or capability-based; that's how badly they suck at every other customer-facing metric.

You think Amazon is frustrating to deal with? Try getting into a CSR chat loop with an uncaring LLM, followed up by an uncaring human CSR.

My minimum response time with their customer service is 14 days (two weeks), and that's while paying $200 USD a month.

An LLM could be 'The Great Kreskin' and I would still try to avoid paying for that level of abuse.

sbarre 2 days ago | parent | next [-]

Maybe you don't want to share, but I'm scratching my head trying to think of something I would need to talk to Anthropic's customer service about that would be urgent and un-straightforward enough to frustrate me to the point of using the term "abuse".

babelfish 2 days ago | parent [-]

Particularly since they seem to be complaining about service as a consumer rather than as an enterprise customer...

StephenHerlihyy 2 days ago | parent | prev [-]

What's fun is that I've had Anthropic's AI support give me blatantly false information. It tried to tell me I could get a full year of Claude Max for only $200. When I asked if that was true, it quickly backtracked and acknowledged its mistake. I figure someone more litigious will eventually try to capitalize.

nielsbot 2 days ago | parent [-]

"Air Canada must honor refund policy invented by airline’s chatbot"

https://arstechnica.com/tech-policy/2024/02/air-canada-must-...

ricardobayes 2 days ago | parent | prev | next [-]

It's an interesting case. IMO LLMs are not a product in the classical sense; companies like Anthropic are basically doing "basic research" so others can build products on top of it. Perhaps Anthropic will charge a royalty on API usage. I personally don't think you can earn billions selling $500 subscriptions; the SaaS industry has shown as much. But it's yet to be seen whether the wider industry will accept such a royalty model; it would be akin to Kodak charging filmmakers based on the success of the movie. Somehow, AI companies will need to build a monetization pipeline that earns them a small amount of money "with every gulp", to use a soft-drink analogy.
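As a thought experiment, a per-use royalty of that kind is easy to sketch. Everything below is hypothetical: the 2% rate, the per-call price, and the call volume are invented for illustration, and no AI vendor is confirmed to price this way.

```python
# Hypothetical "with every gulp" royalty: the model provider takes a
# small cut of each downstream API call. All rates and figures below
# are invented for illustration.
def royalty_owed(calls: int, price_per_call: float, rate: float = 0.02) -> float:
    """Royalty owed to the model provider for one billing period."""
    return calls * price_per_call * rate

# A product making 10M calls/month at $0.01 per call, with a 2% royalty:
print(f"${royalty_owed(10_000_000, 0.01):,.2f}/month")  # $2,000.00/month
```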

Barbing 2 days ago | parent | prev [-]

Thoughts on Ed Zitron’s pessimism?

“There Is No AI Revolution” - Feb ‘25:

https://www.wheresyoured.at/wheres-the-money/

reissbaker 21 hours ago | parent [-]

Ed Zitron plainly has no idea what he's talking about. For example:

> Putting aside the hype and bluster, OpenAI — as with all generative AI model developers — loses money on every single prompt and output. Its products do not scale like traditional software, in that the more users it gets, the more expensive its services are to run because its models are so compute-intensive.

While OpenAI's numbers aren't public, this seems very unlikely. Open-source models can be run profitably for cents per million input tokens at FP8, and OpenAI is already training (and thus certainly running) in FP4. Even if the closed-source models are many times bigger than the largest open-source ones, OpenAI is still making money hand over fist on inference: the GPT-5 API costs $1.25 per million input tokens, far more than the compute it takes to serve them. And unless you're using the API, it's incredibly unlikely you're burning through millions of tokens a week... and yet subscribers to the chat UI pay $20/month (at minimum!), which is much more than a few million tokens a week cost to serve.
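To make that arithmetic concrete, here's a rough back-of-envelope sketch in Python. The $1.25 per million input tokens and $20/month figures come from the paragraph above; the serving cost and per-user usage numbers are illustrative assumptions, since OpenAI's actual costs aren't public.

```python
# Back-of-envelope LLM inference economics. Illustrative only: the
# serving cost below is an assumption in line with what open-source
# models cost to run, not OpenAI's actual (non-public) numbers.
api_price_per_m = 1.25     # GPT-5 API price, $/million input tokens
assumed_cost_per_m = 0.10  # assumed serving cost, $/million tokens

margin = (api_price_per_m - assumed_cost_per_m) / api_price_per_m
print(f"API gross margin: {margin:.0%}")  # ~92% under these assumptions

# Subscription side: $20/month, assuming a heavy user who burns
# through 5 million tokens a month.
subscription = 20.00
tokens_m_per_month = 5
cost_to_serve = tokens_m_per_month * assumed_cost_per_m
print(f"Cost to serve: ${cost_to_serve:.2f} vs ${subscription:.2f} paid")
# -> Cost to serve: $0.50 vs $20.00 paid
```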

Ed Zitron repeats this claim many, many, excruciatingly many times throughout the article, and it seems quite central to the point he's trying to make. But he's wrong, and wrong enough that I think you should doubt he knows much about what he's talking about.

(His entire blog seems to be a series of anti-tech screeds, so in general I'm pretty dubious he has deep insight into much of anything in the industry. But he quite obviously doesn't know about the economics of LLM inference.)

Barbing 14 hours ago | parent [-]

Thank you for your analysis!