BosunoB 2 days ago

All subscription models are subsidized by users who don't use much. The fact that somebody on a $20 sub might get $50 in value isn't crazy if there are three people who only get $10 in value. This isn't some sign that the model is broken; it's the intended outcome.
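The cross-subsidy arithmetic above works out exactly with those numbers. A quick sketch (all figures are the illustrative ones from the comment, not real Anthropic economics):

```python
# One heavy user who consumes $50 of compute, three light users at $10 each,
# everyone paying the same $20/month subscription price.
price = 20
usage_costs = [50, 10, 10, 10]  # per-user cost to serve

revenue = price * len(usage_costs)  # 4 subscribers * $20 = $80
cost = sum(usage_costs)             # $50 + 3 * $10   = $80
margin = revenue - cost

print(revenue, cost, margin)  # 80 80 0
```

The heavy user loses the provider $30, the three light users each contribute $10, and the pool breaks even overall.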

Also, I didn't read this whole thing, but I have yet to see Zitron respond to the strongest AI financials claim: that the models themselves are profitable on a life-cycle basis, even if the companies are not profitable on an annual basis due to capital expenditure. Dario made exactly this claim, and it more or less blows up all of Zitron's financial arguments.

weakfish 2 days ago | parent | next [-]

> but I have yet to see Zitron respond to the strongest AI financials claim

He does in this [0] article.

[0] https://www.wheresyoured.at/ai-is-really-weird/

BosunoB a day ago | parent [-]

Thanks for the link. I'll admit I'm not an expert on the business side of this, but is this really much of a response? He seems to just call it strange accounting and then move on.

It doesn't even feel like particularly strange accounting to me. Aren't there plenty of companies that spend a lot in one year and realize the gains in the next year? If I build a house this year and sell it next year, the house was still profitable, even if next year I'm building 3 more houses to sell in the year after.

mrkeen 2 days ago | parent | prev | next [-]

I subscribed to Claude for a month. I sat down with it for a few sessions, but in each case I ran into a limit before I achieved anything worthwhile. And that was with me babysitting it the whole time to try to get the most out of it. I'm not sure it's possible to use it less (so that others can use it more) and get anything meaningful done.

BosunoB a day ago | parent [-]

Most small features take 80-150k tokens to implement, and most large features take 200-250k. A hobbyist working maybe 10 hours a week can get stuff done without coming anywhere near the weekly usage cap.

csande17 2 days ago | parent | prev | next [-]

Zitron has responded to that claim here: https://www.wheresyoured.at/ai-is-really-weird/#does-anthrop...

The TL;DR is that Dario likes to talk about imaginary/hypothetical companies a lot in interviews, and those companies' financials don't have a direct basis in reality.

BosunoB a day ago | parent [-]

Thanks for the link. There's not much of an argument here from Ed, though, besides that it's an unusual way to view or report margins.

But it's not that unusual, right? If I build a house this year and sell it next year, the house might still be profitable even if next year I'm building 3 more houses, so the company as a whole is still in the red on an annual basis.
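The house analogy can be made concrete with toy numbers (all figures hypothetical): each house is profitable on a per-unit basis, yet a company that triples its build rate every year shows worsening annual cash flow.

```python
# Each house costs 100 to build this year and sells for 150 next year.
# The company triples the number of houses it starts each year.
build_cost, sale_price = 100, 150
builds = [1, 3, 9]  # houses started in years 0, 1, 2

cash_flows = []
for year in range(3):
    sold = builds[year - 1] if year > 0 else 0  # sell last year's builds
    cash_flows.append(sold * sale_price - builds[year] * build_cost)

per_house_profit = sale_price - build_cost  # +50 on every single house
print(cash_flows, per_house_profit)  # [-100, -150, -450] 50
```

Every unit earns +50 over its life cycle, but the annual numbers get redder the faster the company grows, which is roughly the shape of the claim Dario is making about model training runs.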

I mean, I'm not a financial expert but that doesn't seem all that unusual to me.

csande17 a day ago | parent [-]

The first part of the argument is just noticing that Dario is carefully avoiding making factual claims about Anthropic. Like, if the bank asked you if your construction company was profitable, would it be acceptable to respond: "Well, hypothetically, if a construction company sold houses for more than it cost to build them, that company could be considered profitable. It is possible to imagine a stylized model of a construction company that is theoretically profitable."? If the real, non-hypothetical company that Dario runs has financial results which support this argument, he should probably say them more often.

The second prong of the argument is basically that, when you invest in Anthropic, you can't just invest in one model and then collect the profits from that model. You're investing in a whole company in the hopes that they can be profitable overall; at some point they'll need to stop spending so much money on training and give it back to the investors instead. Zitron argues that this isn't going to happen because training is actually something that companies need to do to retain customers at all. An analogy here might be the fact that Microsoft has to spend a certain amount of "R&D" budget fixing security vulnerabilities in Windows Server just to retain their current customer base; if attackers found out about a serious security hole but Microsoft didn't fix it, everyone would need to stop using Windows Server. LLM companies do the same kind of thing to fix "jailbreaks" and other unexpected model behaviors.

The third prong of the argument is that, in general, there's a long history of companies using creative accounting to try and make themselves look profitable and then collapsing because they're not actually profitable. For example, WeWork's "community-adjusted EBITDA" figure claimed the company was profitable using very similar arguments to Dario, and then the company went bankrupt. If you're already cooking the numbers, you have almost arbitrary flexibility to report whatever "margins" you want by excluding some of your costs from the calculation.

overrun11 a day ago | parent [-]

> hypothetically, if a construction company sold houses for more than it cost to build them, that company could be considered profitable.

Construction companies capitalize and depreciate over many years so they can answer "yes" they are profitable even when they are very cashflow negative. This is exactly Dario's point: model training costs are treated as expenses but in practice are much closer to construction costs. Model training effectively produces an asset, the model weights, which will generate revenue for many years into the future.
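The difference between expensing and capitalizing can be shown with a toy model (numbers are made up for illustration; this is not Anthropic's actual cost structure): the cash flows are identical either way, but capitalizing with straight-line depreciation spreads the cost over the asset's useful life instead of dumping it all into year 0.

```python
# A 300 training cost that produces a model earning 200/year for 3 years.
training_cost, annual_revenue, useful_life = 300, 200, 3

# Expensed: the full cost hits reported profit in year 0.
expensed = [annual_revenue - (training_cost if y == 0 else 0)
            for y in range(useful_life)]

# Capitalized: straight-line depreciation spreads the cost evenly.
depreciation = training_cost / useful_life
capitalized = [annual_revenue - depreciation for y in range(useful_life)]

print(expensed)     # [-100, 200, 200] -> looks unprofitable in year 0
print(capitalized)  # [100.0, 100.0, 100.0] -> profitable every year
```

Total profit over the three years is the same (300) under both treatments; only the year-by-year picture changes, which is the whole substance of the "life-cycle vs annual" framing.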

> Zitron argues that this isn't going to happen because training is actually something that companies need to do to retain customers at all.

This is exactly why Dario's point about each training run being profitable is so important: it suggests that this is not true. Customers are happy to keep using old models long enough to fully pay off their costs.

> there's a long history of companies using creative accounting

Zitron seems to know very little about accounting, as evidenced by his incorrect use of terms like "gross margin" in this article. He's pattern matching against his limited exposure to company financials to find superficial similarities between the AI labs and famous frauds. Find me a company that doesn't report non-GAAP measures. A Google search claims 96% of S&P 500 companies do it. Are they all frauds too? Sometimes non-GAAP adjustments are eye-roll inducing, but they are tolerated because they can be genuinely useful for getting a fuller picture of the business.

CodingJeebus 2 days ago | parent | prev [-]

> which is that the models themselves are profitable on a life-cycle basis, even if the companies are not profitable on an annual basis due to capital expenditure.

Until they file an S-1 to go public and show the world the books, take everything they say with a grain of salt. The amount of financial engineering going on in this space is astounding, and I'll believe it when I see an objective third party release an audit confirming this claim.