fluidcruft 4 days ago

But aren't we only worth something like $300/year each to Meta in terms of ads? I remember someone arguing something like that when the TikTok ban was being passed into law... essentially the argument was that TikTok was "dumping" engagement at far below market value (at something like $60/year) to damage American companies. That was the argument I remember, anyway.

majormajor 4 days ago | parent | next [-]

Here is some old analysis I remember seeing at the time of Hulu ads vs no-ads plans: https://ampereanalysis.com/insight/hulus-price-drop-is-a-wis...

They dropped the price of their with-ads plan by $2/mo to widen the gap between the no-ads plan and the ads plan, and the analyst here works backward from their reported ad revenue and user numbers to estimate roughly $12/mo per user from ads.

Whether Meta, across all their properties, does more than $144/yr in ads per user is an open question; long-form video ads are sold at a premium, but Facebook/IG users see a LOT of ads across a lot of Meta platforms. Hulu's biggest advantage in ad dollars per user is that it's US-only. ChatGPT would also likely be considered premium ad inventory, though they'd have a delicate dance there: keeping that inventory high-value, and selling enough ads to make it worthwhile, without pissing users off too much.

Here they estimate a much lower number for ad revenue per Meta user, around $45 a year - https://www.statista.com/statistics/234056/facebooks-average... - but that figure is probably driven disproportionately by wealthier users in the US and similar countries compared to the long tail of global users.
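For a rough sanity check of the gap between those two estimates (all the inputs below are round, illustrative figures, not exact numbers):

    # Back-of-envelope ARPU check; all inputs are rough, illustrative figures.

    # Hulu-style estimate: the analyst's ~$12/user/month from ads
    hulu_per_user_month = 12.0
    print(f"Hulu-style: ${hulu_per_user_month * 12:.0f}/user/year")  # ~$144

    # Meta-style estimate: total ad revenue spread over all users worldwide
    meta_ad_revenue_year = 135e9   # rough assumption, $/year
    meta_users = 3e9               # rough assumption
    print(f"Meta-style: ${meta_ad_revenue_year / meta_users:.0f}/user/year")  # ~$45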

One problem for LLM companies compared to media companies is that the marginal cost of serving the product to additional users is quite a bit higher. So the business models, ads or subscription, will be interesting to watch from a global POV.

One wonders what the monetization plan for the "writing code with an LLM using OSS libraries and not interested in paying for enterprise licenses and such" crowd will be. What sort of ads can you pull off in those conversations?

cj 4 days ago | parent | prev [-]

If that’s the case, we have an even bigger problem on our hands. How will these companies ever be profitable?

If we're already paying $20/mo and they're operating at a loss, what's the next move (assuming we're only worth an extra $300/yr with ads)?

The math doesn't add up, unless we stop training new models and degrade the ones currently in production, or have some compute breakthrough that makes hardware + operating costs an order of magnitude cheaper.
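Just to make the shape of that math explicit (every number below is a made-up placeholder; the point is how sensitive the outcome is to the unknown inference and training costs):

    # Toy unit-economics sketch; every figure is a placeholder assumption.
    subscription_per_user_year = 20 * 12   # $240/yr from a $20/mo plan
    ads_per_user_year = 300                # the optimistic "$300/yr in ads" figure
    inference_cost_per_user_year = 200     # assumed serving cost per paying user
    training_cost_next_model = 10e9        # assumed cost of the next frontier model
    paying_users = 20e6                    # assumed paying user base

    gross_margin = subscription_per_user_year + ads_per_user_year - inference_cost_per_user_year
    training_per_user = training_cost_next_model / paying_users

    print(f"gross margin per user:  ${gross_margin:.0f}/yr")       # $340/yr
    print(f"training cost per user: ${training_per_user:.0f}/yr")  # $500/yr
    print(f"net per user:           ${gross_margin - training_per_user:.0f}/yr")  # -$160/yr

Swing the inference or training assumptions a little and the sign flips either way, which is exactly the problem.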

rrrrrrrrrrrryan 4 days ago | parent | next [-]

OpenAI has already started degrading their $20/month tier by automatically routing most of the requests to the lightest free-tier models.

We're very clearly heading toward a future where there will be a heavily ad-supported free tier, a cheaper (~$20/month) consumer tier with no ads or very few ads, and a business tier ($200-$1000/month) that can actually access state of the art models.

Like Spotify, the free tier will operate at a loss and act as a marketing funnel to the consumer tier, the consumer tier will operate at a narrow profit, and the business tier for the best models will have wide profit margins.

lodovic 4 days ago | parent | next [-]

I find that hard to believe. As long as we have open weight models, people will have an alternative to these subscriptions. For $200 a month it is cheaper to buy a GPU with lots of memory or rent a private H200. No ads and no spying. At this point the subscriptions are mainly about the agent functionality and not so much the knowledge in the models themselves.

lupusreal 4 days ago | parent | next [-]

I think what you're missing here is that most OpenAI users aren't technical in the slightest. OpenAI has massive and growing adoption among the general public. The general public buys services rather than rolling their own for free, and they even prefer to buy from the brand they know over getting a cheaper service from somebody else.

BigGreenJorts 4 days ago | parent [-]

The conclusion I got from their comment was that the highest-margin tier (the business customers) would be incentivized to build their own service instead of paying the subscription. Of course, I'm doubtful this is viable, or at all more cost-effective, for the vast majority of businesses, when a service like AWS is highly popular and extremely profitable.

HotHotLava 4 days ago | parent | prev [-]

H200 rental prices currently start at $2.35 per hour, or about $1,700 per month around the clock. Even if you only rent for 4h a day, the $200 subscription is still quite a bit cheaper. And I'm not even sure that the highest-quality open models run on a single H200.
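The arithmetic, using the quoted $2.35/hr rate:

    # H200 rental vs. a $200/mo subscription, at the quoted $2.35/hr.
    hourly_rate = 2.35
    subscription = 200                        # $/month

    around_the_clock = hourly_rate * 24 * 30  # ~$1,692/month
    four_hours_a_day = hourly_rate * 4 * 30   # ~$282/month

    print(f"24/7 rental:   ${around_the_clock:,.0f}/mo")
    print(f"4h/day rental: ${four_hours_a_day:,.0f}/mo")
    print(f"subscription:  ${subscription}/mo")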

willcannings 4 days ago | parent | prev [-]

Most? Almost all my requests to the "Auto" model end up being routed to a "thinking" model, even ones I'd expect ChatGPT to answer fine without extra reasoning time. Never say never, but right now the router doesn't seem to be optimising for cost (at least for me); it really does seem to be selecting a model based on the question itself.

furyofantares 4 days ago | parent | prev | next [-]

> If we’re already paying $20/mo and they’re operating at a loss

I'm quite confident they're not operating at a loss on those subscriptions.

swiftcoder 4 days ago | parent [-]

They are running at a massive loss overall. It feels pretty safe to assume that they wouldn't be if their cheapest subscription tier were breaking even.

furyofantares 3 days ago | parent | next [-]

Their cheapest tier is free; they lose money on that, of course. And they spend a lot of money training new models.

Anthropic has said they have made money on every model so far, just not enough to pay for training the next one, and each generation has so far been much more costly to train than the last. At some point they will probably train an unprofitable model if training costs keep rising this dramatically.

OpenAI burns more money on their free tier and might be spending more building out for future training (I don't know whether they are), but both companies make money on their $20 subscriptions for sure. Inference is very cheap.

wtbdbrrr 4 days ago | parent | prev [-]

Nonsense for the public. They are Amazon, basically: they take the loss so the overall ecosystem (x'D like with crypto) can gain massively, onboard all kinds of target noobs, sry, groups, brutally prime users, discourage as many non-AI processes as possible, and steer all industries towards replacing even those processes with AI that are not worth being replaced with AI, like writing and art.

Of course there are a lot of valuable use cases. Irrelevant in this context, though.

The productivity boosts in the creative industries will additionally lower the standards and split the public even further, ensuring that if you want quality, you have to fuck over as many people as possible so that you can afford quality (and an ad-free life, of course; if you want a peaceful peripheral, pay up. It's extortion 404, 101-303, already successfully implemented on social media, TV and the radio).

They don't lose. They make TONS OF FAKE MONEY everywhere in the, again, cough,

"ecosystem".

It's important to understand the Amazon part. The number of damaging mechanisms that platform has anchored in workers, jobbers, business people and consumers is brutal.

All those mechanisms converge on more easy money and a quicker deterioration of local environments, leading to worse health and more business opportunities aimed at mitigating the damage; almost entirely in vain, of course, because the worst of it is accelerating much quicker. It's easier money.

At the same time, people's psychology is primed for bad business practices, literally making people dumber and lowering their standards to make them easier targets. Don't look at the bottom to see this; look at the upper middle class and above.

It's a massive net loss for civilization and humanity. A brutal net negative impact overall.

madkangas 3 days ago | parent [-]

Thank you for writing this. Your point about "quicker deterioration of local environments" is thought-provoking.

My key technical complaint about LLMs to date is the general inability to add substantial local context. How can I make it understand my business, my processes, my approach to the market? Can I retrain it? Or make it understand my data warehouse?
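(For what it's worth, the workaround most people reach for today is retrieval rather than retraining: pull your own documents into the prompt at query time. A bare-bones sketch below, with made-up documents and a crude keyword ranking standing in for a real embedding/vector store:)

    # Minimal retrieval-augmented prompting sketch (pure Python, hypothetical data).
    # Real systems use embeddings + a vector store, but the shape is the same:
    # retrieve relevant local context, then prepend it to the model prompt.

    local_docs = {
        "pricing.md": "Our enterprise tier is billed annually per seat...",
        "warehouse.md": "Fact tables live in analytics.orders; dimensions in analytics.dim_*...",
        "strategy.md": "We sell to mid-market logistics companies in North America...",
    }

    def retrieve(question: str, docs: dict[str, str], k: int = 2) -> list[str]:
        """Crude keyword-overlap ranking; a stand-in for embedding similarity."""
        q_words = set(question.lower().split())
        scored = sorted(docs.items(),
                        key=lambda kv: len(q_words & set(kv[1].lower().split())),
                        reverse=True)
        return [text for _, text in scored[:k]]

    def build_prompt(question: str) -> str:
        context = "\n---\n".join(retrieve(question, local_docs))
        return f"Use only the context below to answer.\n\n{context}\n\nQuestion: {question}"

    print(build_prompt("Which schema holds our order fact tables"))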

I think you are explaining why LLM providers don't care about solving my concerns, generally speaking. This is sobering.

fluidcruft 4 days ago | parent | prev [-]

Well, to make things worse, I was pretty convinced those were faked numbers to push the TikTok ban forward. I really doubt Meta and Google are each taking in that much per user. But my point is more that even if it were that high, ChatGPT isn't going to capture all the engagement. And even then I don't know whether $300 is much once you subtract operating overhead. I'm just saying I have trouble believing there's gold to be had at the end of this LLM ad rainbow. People just seem to throw out ideas like "ads!" as if it's a surefire winning lottery ticket or something.

Geezus_42 4 days ago | parent [-]

Everything devolves into ads eventually. Why would productized LLMs be any different?

fluidcruft 3 days ago | parent [-]

I didn't say they wouldn't; I'm more skeptical about whether it's a sustainable business model. I mean, sure, gas stations and airports have ads, but nobody gives you gas or airfare in exchange for watching them. Ads are a fraction of the revenue needed.

My point is that someone starting an airline can't get away with hopes and dreams about making bank on ads.