intended 2 days ago

> afaik most estimate north of 80% profit margins

This seems to be the linchpin of your argument.

It makes me wonder if I have been living under a rock, because I have never heard of frontier labs making money. AFAIK all AI firms are simply burning money to acquire customers at this stage. Is this wrong?

asdfasgasdgasdg 2 days ago | parent | next [-]

>It makes me wonder if I have been living under a rock, because I have never heard of frontier labs making money.

You're conflating profit on the marginal token with overall profit (basically gross margin vs. operating margin). The comment you're replying to is estimating that AI labs are probably making a substantial profit per paid token. It's just that so far that profit has not been able to overcome the ongoing R&D and capex costs.
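
A toy sketch of that distinction (all figures are hypothetical, chosen only to match the "north of 80%" gross-margin claim from the parent thread):

```python
# Hypothetical per-period figures for an AI lab; none of these
# numbers come from the thread or any company's actual financials.

inference_revenue = 100.0  # revenue from paid tokens
serving_cost = 20.0        # GPU/energy cost of serving those tokens

# Gross margin: profit per paid token, before fixed costs.
gross_profit = inference_revenue - serving_cost
gross_margin = gross_profit / inference_revenue  # 0.80 -> "80%+ margin"

# Operating margin: after ongoing R&D and capex (training runs,
# salaries, datacenters), which dwarf the serving cost.
rnd_and_capex = 150.0
operating_profit = gross_profit - rnd_and_capex  # -70.0 -> overall loss

print(gross_margin, operating_profit)
```

So both claims can be true at once: healthy margin on each paid token, negative operating profit overall.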

kgwgk 2 days ago | parent [-]

> not been able to overcome the ongoing R&D and capex costs.

And the cost of not-quite-paid tokens.

margalabargala 2 days ago | parent [-]

Which may or may not exist, hence this thread.

kgwgk 16 hours ago | parent [-]

Non-paid tokens definitely exist, and they weren’t included in the remark about “substantial profit per paid token”. Underpaid/subsidized tokens also exist, and those don’t provide “substantial profit” either.

margalabargala 16 hours ago | parent [-]

Are you talking about free promo tokens the companies give out, or are you implying that subscription tokens are subsidized enough to be sold below cost?

pmdr 2 days ago | parent | prev | next [-]

People tend to believe OpenAI and Anthropic could turn profitable any time; all they'd need to do is stop training newer/better models. Source? Sam & Dario, of course (trust us, bro). That may or may not be true (it likely depends on whether they sell access at API prices), but the scenario where training stops is simply unrealistic at this point.

dgellow 2 days ago | parent | prev [-]

I’m not exactly sure of the details, but I believe they do make _some_ money on inference. They then have to reinvest it all into training the next model to stay competitive. So even if inference is positive (I’m seeing inconsistent reports on whether that’s the case), it is immediately spent.
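
A toy cash-flow sketch of that dynamic (every number here is hypothetical, just to illustrate the shape of the problem):

```python
# Each generation: inference on the current model is profitable,
# but training the next model costs even more. Numbers are made up.

inference_profit = [1.0, 2.0, 4.0]    # inference profit per generation
next_training_cost = [2.0, 4.0, 8.0]  # cost of training the next model

cash = 0.0
for profit, training in zip(inference_profit, next_training_cost):
    # Everything earned on inference (and then some) goes into the next run.
    cash += profit - training

print(cash)  # negative overall, despite positive inference every generation
```

As long as each training run costs more than the inference profit of the model before it, the cumulative position stays negative no matter how "profitable" inference looks in isolation.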

I do not understand how these companies can end up in the black unless something fundamental changes.