|
| ▲ | roughly 9 hours ago | parent | next [-] |
| Social networks finding profitability via advertising is what created the entire problem space of social media - the algorithmic timelines, the gaming, the dopamine circus, the depression, everything negative that’s come from social media has come from the revenue model. So yes, I think it’s worth being concerned about how LLMs make money - not because I’m worried they won’t, but because I’m worried they will. |
| |
| ▲ | milesvp 8 hours ago | parent | next [-] | | I think this can't be overstated. It also destroyed search. I listened to a podcast a few years ago with an early Googler who talked about this very precipice in Google's early days. They did a lot of testing, and a lot of modeling of people's valuation of search. They figured that the average person got something like $50/yr of value out of search (I can't remember the exact number, I hope I'm not off by an order of magnitude), and that was the most they could ever realistically charge. Meanwhile, advertising for just Q4 was like 10 times that value. It meant that they knew advertising on the platform was inevitable. They also acknowledged that it would lead to the very problem that Brin and Page wrote about in their seminal paper on search. I see LLMs inevitably leading to the same place. There will undoubtedly be advertising baked into the models. It is too strong a financial incentive. I can only hope that an open source alternative will at least leave a hobbled version to consume. edit: I think this was the podcast https://freakonomics.com/podcast/is-google-getting-worse/ | |
| ▲ | SJC_Hacker 4 hours ago | parent [-] | | This is an interesting take - is my "attention" really worth several thousand a year? In the sense that my purchasing decisions are influenced by advertising to such a degree that someone is literally paying someone else for my attention... I wonder: could I sell my "attention" myself, instead of others profiting off it? | |
| ▲ | lymbo an hour ago | parent [-] | | Yes, but your attention rapidly loses value the more your subsequent behavior misaligns with the buyer’s desires. In other words, the value of targeting unsuspecting, idle minds far exceeds that of a willing and conscious attention seller. |
|
| |
| ▲ | Centigonal 2 hours ago | parent | prev | next [-] | | oh, I 100% agree with this. The way the social web was monetized is the root of a lot of evil. With AI, we have an opportunity to learn from the past. I think a lesson here is "don't wait to think critically about the societal consequences of the next Big Tech Thing's business model because you have doubts about its profitability or unit economics." | |
| ▲ | socalgal2 5 hours ago | parent | prev [-] | | Social networks will have all of those effects without any effort by the platform itself, because the person with more followers has more influence, so the people on the platform will do all they can to get more. I'm not excusing the platforms for bad algorithms. Rather, I believe it's naive to think that, but for the behavior of the platform itself, things would be great and rosy. No, they wouldn't. The fact that nearly every person in the world can mass communicate to nearly every other person in the world is the core issue. It is not platform design. |
|
|
| ▲ | Wowfunhappy 8 hours ago | parent | prev | next [-] |
| > This echoes a lot of the rhetoric around "but how will facebook/twitter/etc make money?" back in the mid 2000s. The difference is that Facebook costs virtually nothing to run, at least on a per-user basis. (Sure, if you have a billion users, all of those individual rounding errors still add up somewhat.) By contrast, if you're spending lots of money per user... well, look at what happened to MoviePass! The counterexample here might be YouTube; when it launched, streaming video was really expensive! It's still expensive, but clearly Google has figured out the economics. |
| |
| ▲ | jsnell 7 hours ago | parent [-] | | You're either overestimating the cost of inference or underestimating the cost of running a service like Facebook at that scale. Meta's cost of revenue (i.e. just running the service, not R&D, not marketing, not admin, none of that) was about $30B/year in 2024. In the leaked OpenAI financials from last year, their 2024 inference costs were 1/10th of that. | | |
| ▲ | matthewdgreen 5 hours ago | parent [-] | | But their research costs are extremely high, and without a network effect that revenue is only safe until a better competitor emerges. |
|
|
|
| ▲ | overfeed 9 hours ago | parent | prev | next [-] |
| > This echoes a lot of the rhetoric around "but how will facebook/twitter/etc make money?" The answer was, and will be, ads (talk about inevitability!) Can you imagine how miserable interacting with ad-funded models will be? Not just because of the ads they spew, but also the penny-pinching on training and inference budgets, with an eye focused solely on profitability. That is what the future holds: consolidations, little competition, and models that do the bare minimum, trained and operated by profit-maximizing misers - not the unlimited-intelligence AGI dream they sell. |
| |
| ▲ | signatoremo 8 hours ago | parent | next [-] | | It won’t be ads. Social media targets consumers, so advertising is dominant. We all love free services and don’t mind some ads. AI, on the other hand, targets businesses and consumers alike. A bank using an LLM won’t get ads; using LLMs will be a cost of doing business. Do you know what that means for consumers? The price of ChatGPT will go down. | |
| ▲ | johnnyanmac 7 hours ago | parent [-] | | >AI, on the other hand, targets businesses and consumers alike. Okay. So AI will use ads for consumers and make deals with the billionaires. If Windows 11/12 still puts ads in what is a paid premium product, I see no reason for optimism that a "free" chatbot won't also resort to them. Not as long as the people up top only see dollar signs and not long-term longevity. >The price of ChatGPT will go down. In reality, the price of ChatGPT is going up in the meantime. This is like hoping grocery prices come down as inflation lessens. That never happens; you can only hope to be compensated more to make up for inflation. | |
| |
| ▲ | 6510 9 hours ago | parent | prev [-] | | I see a real window this time to sell your soul. |
|
|
| ▲ | ysavir 8 hours ago | parent | prev | next [-] |
| The thing about facebook/twitter/etc was that everyone knew how they achieve lock-in and build a moat (network effects), but the question was where to source revenue. With LLMs, we know what the revenue source is (subscription prices and ads), but the question is about the lock-in. Once each of the AI companies stops building new iterations and just offers a consistent product, how long until someone else builds the same product but charges less for it? What people often miss is that building the LLM is actually the easy part. The hard part is getting sufficient data on which to train it, which is why most companies just put ethics aside and steal and pirate as much as they can before any regulation cuts them off (if any ever does). But that same approach means that anyone else can build an LLM and train on that data, and pricing becomes a race to the bottom - if open source models don't cut them out completely. |
| |
| ▲ | umpalumpaaa 7 hours ago | parent [-] | | ChatGPT also makes money via affiliate links. If you ask ChatGPT something like "what is the best airline-approved cabin luggage you can buy?" you get affiliate links to Amazon and other sites. I use ChatGPT most of the time before I buy anything these days… From personal experience (I operated an app financed by affiliate links), I can tell you that this for sure generates a lot of money. My app was relatively tiny and I only got about 1% of the money I generated, but that app pulled in about $50k per month. Buying better things is one of my main use cases for GPT. | |
| ▲ | ysavir 4 hours ago | parent [-] | | Makes you wonder whether the affiliate links are actual, valid affiliate links or just hallucinations from affiliate links it's come across in the wild | | |
| ▲ | umpalumpaaa an hour ago | parent [-] | | It's clearly 100% custom UI logic implemented by OpenAI… They render the products in carousels… They probably get a list of product and brand names from the LLM (for certain requests/responses) and render that in a separate UI after fetching the affiliate links for those products… It's not hard to do: just slap your affiliate ID onto the links you found and you're done. |
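For what it's worth, that last step really is tiny. Here's a minimal sketch in Python of what "slap your affiliate ID onto the links" amounts to; the tag value is a hypothetical placeholder, and it assumes Amazon-style links where the Associates ID rides along as a `tag` query parameter:

```python
from urllib.parse import urlparse, parse_qs, urlencode, urlunparse

AFFILIATE_TAG = "example-20"  # hypothetical Amazon Associates ID

def add_affiliate_tag(url: str) -> str:
    """Append (or overwrite) the 'tag' query parameter on a product URL."""
    parts = urlparse(url)
    query = parse_qs(parts.query)
    query["tag"] = [AFFILIATE_TAG]
    return urlunparse(parts._replace(query=urlencode(query, doseq=True)))

# e.g. turn a plain product link into an affiliate link
print(add_affiliate_tag("https://www.amazon.com/dp/B000000000"))
# -> https://www.amazon.com/dp/B000000000?tag=example-20
```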
|
|
|
|
| ▲ | rpdillon 5 hours ago | parent | prev | next [-] |
| Yep. Remember when people said Amazon could never make money, and we kept trying to explain that it was reinvesting its earnings into R&D, and nobody believed it? All the rhetoric went from "Amazon can't be profitable" to "Amazon is a monopoly" practically overnight. It's like people don't understand the explore/exploit trade-off. |
| |
| ▲ | mxschumacher 4 hours ago | parent [-] | | AWS is certainly super profitable, but if the e-commerce business were standalone, would it really be such a cash gusher? | |
| ▲ | rpdillon 3 hours ago | parent [-] | | Amazon is successful because of the insanely broad set of investments they've made - many of them compound well in a way that supports their primary business. Amazon Music isn't successful, but it makes Kindle tablets more successful. This is in contrast to Google, which makes money on ads, and everything else is a side quest. Amazon has side quests, but also has many more initiatives that create a cohesive whole from the business side. So while I understand how it looks from a financial perspective, I think that perspective is distorted in terms of what causes those outcomes. Many of the unprofitable aspects directly support the profitable ones. Not always, though. |
|
|
|
| ▲ | magicalist 9 hours ago | parent | prev | next [-] |
| > LLMs might shake out differently from the social web, but I don't think that speculating about the flexibility of demand curves is a particularly useful exercise in an industry where the marginal cost of inference capacity is measured in microcents per token. That we might reach the point where companies say "it's not worth continuing research or training new models" seems to reinforce the OP's point, not contradict it. |
| |
| ▲ | Centigonal 9 hours ago | parent [-] | | The point I'm making is that, even in the extreme case where we cease all additional R&D on LLMs, what has been developed up until now has a great deal of utility and transformative power, and that utility can be delivered at scale for cheap. So, even if LLMs don't become an economic boon for the companies that enable them, the transformative effect they have and will continue to have on society is inevitable. Edit: I believe that "LLMs transforming society is inevitable" is a much more defensible assertion than any assertion about the nature of that transformation and the resulting economic winners and losers. | | |
| ▲ | johnnyanmac 7 hours ago | parent [-] | | >what has been developed up until now has a great deal of utility and transformative power I think we'd be more screwed than VR if development ceased today. They are little more than toys right now, whose most successful outings are grifts, and the most useful tools simply aid existing tooling (auto-correct). It is not really "intelligence" as of now. >I believe that "LLMs transforming society is inevitable" is a much more defensible assertion Sure. But into what? We can't just talk about change for change's sake. Look at the US in 2025 with that mentality. |
|
|
|
| ▲ | johnnyanmac 8 hours ago | parent | prev | next [-] |
| Well, given the answers to the former: maybe we should stop now, before we end up selling even more of our data off to technocrats. Or worse, before your chatbot starts shilling to you between prompts. And yes, these are still businesses. If they can't find profitability they will drop it like it's hot, i.e. we hit another bubble burst of the kind tech is known for every decade or two. There's no free money to carry them anymore, so it's the perfect time to burst. |
|
| ▲ | mxschumacher 4 hours ago | parent | prev | next [-] |
| What I struggle with is that the top 10 providers of LLMs all have identical* products. The services have amazing capabilities, but no real moats. The social media applications have strong network effects, which drives a lot of their profitability. * sure, there are differences - see the benchmarks - but from a consumer perspective, there's no meaningful differentiation |
|
| ▲ | scarface_74 an hour ago | parent | prev | next [-] |
| No one ever doubted that Facebook would make money. It was profitable early on, never lost that much money, and was definitely profitable by the time it went public. Twitter has never been consistently profitable. ChatGPT also has higher marginal costs than any of the software-only tech companies did previously. |
|
| ▲ | 9 hours ago | parent | prev | next [-] |
| [deleted] |
|
| ▲ | amrocha 9 hours ago | parent | prev [-] |
| The point is that if they're not profitable, they won't be relevant, since they're so expensive to run. And there was never any question as to how social media would make money; everyone knew it would be ads. LLMs can't do ads without compromising the product. |
| |
| ▲ | tsukikage 9 hours ago | parent | next [-] | | You’re not thinking evil enough. LLMs have the potential to be much more insidious about whatever it is they are shilling. Our dystopian future will feature plausibly deniable priming. | |
| ▲ | kridsdale3 9 hours ago | parent | prev | next [-] | | Well, they haven't really tried yet. The Meta app Threads had no ads for the first year, and it was wonderful. Now it does, and its attractiveness was reduced by 1% at most. Meta is really good at knowing how far it can degrade the UX with monetization, and the amount it puts in is hyper profitable. So let's see Gemini and GPT with 1% of response content being sponsored. I doubt we'll see a user exodus, and if that's enough to sustain the business, we're all good. |
| ▲ | Centigonal 9 hours ago | parent | prev | next [-] | | I can run an LLM on my RTX3090 that is at least as useful to me in my daily life as an AAA game that would otherwise justify the cost of the hardware. This is today, which I suspect is in the upper part of the Kuznets curve for AI inference tech. I don't see a future where LLMs are too expensive to run (at least for some subset of valuable use cases) as likely. | | |
| ▲ | TeMPOraL 8 hours ago | parent [-] | | I don't even get where this argument comes from. Pretraining is expensive, yes, but both LoRAs in diffusion models and finetunes of transformers show us that this is not the be-all, end-all; there's plenty of work being done on extensively tuning base models for cheap. But inference? Inference is dirt cheap and keeps getting cheaper. You can run models lagging 6-12 months behind the frontier on consumer hardware, and by this I don't mean absolutely top-shelf specs, but more of "oh cool, turns out the {upper-range gaming GPU/Apple Silicon machine} I bought a year ago is actually great at running local {image generation/LLM inference}!" level. This is not to say you'll be able to run o3 or Opus 4 on a laptop next year - larger and more powerful models obviously require more hardware resources. But this should anchor expectations a bit. We're measuring inference costs in multiples of gaming GPUs, so it's not the impending ecological disaster some would like the world to believe - especially after accounting for data centers being significantly more efficient at this, with specialized hardware, near-100% utilization, and countless optimization hacks (including some underhanded ones). |
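To make the "runs on a gaming GPU" point concrete, here's a minimal local-inference sketch using the llama-cpp-python bindings; the model filename and parameter choices are illustrative assumptions, not a specific recommendation:

```python
# Assumes: `pip install llama-cpp-python` (built with GPU support) and a
# quantized GGUF model downloaded locally (the filename below is hypothetical).
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-3-8b-instruct.Q4_K_M.gguf",  # ~5 GB of quantized weights
    n_gpu_layers=-1,  # offload every layer to the GPU if VRAM allows
    n_ctx=4096,       # context window
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give me three dinner ideas using leftover rice."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```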
| |
| ▲ | swat535 5 hours ago | parent | prev | next [-] | | > LLMs can’t do ads without compromising the product. It depends on what you mean by "compromise" here, but they sure can inject ads… like make the user wait 5 seconds, show an ad, then reply… They can delay response times and promote "premium" plans, etc. Lots of ways to monetize. I suppose the question is: will users tolerate it? Based on what I've seen, the answer is yes; people will tolerate anything as long as it's "free". |
| ▲ | overfeed 9 hours ago | parent | prev | next [-] | | > LLMs can’t do ads without compromising the product. Spoiler: they are still going to do ads, their hand will be forced. Sooner or later, investors are going to demand returns on the massive investments, and turn off the money faucet. There'll be consolidation, wind-downs and ads everywhere. | |
| ▲ | owlninja 9 hours ago | parent | prev | next [-] | | I was chatting with Gemini about vacation ideas and could absolutely picture a world where if it lists some hotels I might like, the businesses that bought some LLM ad space could easily show up more often than others. | |
| ▲ | 9 hours ago | parent | prev | next [-] | | [deleted] | |
| ▲ | Geezus_42 2 hours ago | parent | prev | next [-] | | Social and search both compromised the product for ad revenue. | |
| ▲ | lotsoweiners 6 hours ago | parent | prev [-] | | To be fair, ads always compromise the product. |
|