| ▲ | santadays a day ago |
| I've seen the following quote: "The energy consumed per text prompt for Gemini Apps has been reduced by 33x over the past 12 months." My thinking is that if Google can give away LLM usage (which is obviously subsidized), it can't be astronomically expensive; it's probably in the realm of what we are paying for ChatGPT. Google has its own TPUs and a company culture oriented towards optimizing energy usage and hardware costs. I tend to agree with the grandparent on this: LLMs will get cheaper at today's level of intelligence, and will get more expensive for SOTA models. |
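A back-of-envelope sketch makes the scale concrete. The per-prompt figure below follows Google's reported median of roughly 0.24 Wh for a Gemini text prompt; the electricity price is an assumed round number, and hardware, cooling, and training costs are ignored:

```python
# Rough electricity cost of one LLM prompt, under stated assumptions.
ENERGY_PER_PROMPT_WH = 0.24   # Google's reported median for Gemini text prompts
PRICE_PER_KWH_USD = 0.10      # assumed bulk electricity rate

cost_per_prompt = (ENERGY_PER_PROMPT_WH / 1000) * PRICE_PER_KWH_USD
print(f"electricity per prompt: ${cost_per_prompt:.8f}")        # ~$0.000024
print(f"per billion prompts:    ${cost_per_prompt * 1e9:,.0f}")  # ~$24,000
# At roughly $24k of power per billion prompts, subsidizing a free
# tier is entirely plausible (amortized hardware costs excluded).
```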
|
| ▲ | lelanthran a day ago | parent | next [-] |
| Google is a special case - ever since LLMs came out I've been pointing out that Google owns the entire vertical. OpenAI, Anthropic, etc. are in a race to the bottom, but because they don't own the vertical they are beholden to Nvidia (for chips), they obviously have less training data, and they need a constant influx of cash just to stay in that race to the bottom. Google owns the entire stack: they don't need Nvidia, they already have the data, they own the very important user info via tracking, and they have millions, if not billions, of emails on which to train. Google needs no one, not even VCs. Their costs must be a fraction of those of pure-LLM companies. |
| |
| ▲ | viraptor a day ago | parent | next [-] | | > OpenAI, Anthropic, etc. are in a race to the bottom There's a bit of nuance hiding in the "etc.". OpenAI and Anthropic are still in a race for the top results. MiniMax and GLM are in the race to the bottom while chasing good results: M2.1 is 10x cheaper than Sonnet, for example, yet fairly close to it in practical capability. | | |
| ▲ | lelanthran 17 hours ago | parent [-] | | > There's a bit of nuance hiding in the "etc.". OpenAI and Anthropic are still in a race for the top results. That's not what is usually meant by "race to the bottom", is it? To clarify, in this context I mean that they are all in a race to be the lowest-margin provider. They're at the bottom of the value chain: they sell tokens. It's like being an electricity provider: if you buy $100 of electricity and produce 100 widgets, which you sell for $1k each, that margin isn't captured by the provider. That's what being at the bottom of the value chain means. | | |
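To make that concrete, here is the comment's own example as plain arithmetic (a minimal sketch; only the numbers from the comment are used):

```python
# Who captures the margin at the bottom of the value chain?
electricity_cost = 100.0     # what the provider sells you
widgets_sold = 100
widget_price = 1_000.0

revenue = widgets_sold * widget_price        # $100,000
widget_margin = revenue - electricity_cost   # $99,900
provider_share = electricity_cost / revenue  # 0.1%

print(f"widget maker keeps ${widget_margin:,.0f}")
print(f"input provider captures {provider_share:.1%} of final revenue")
# Selling tokens puts you in the provider's seat: the value created
# on top of your output accrues to someone else.
```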
| ▲ | viraptor 16 hours ago | parent [-] | | I get what it means, but it doesn't look to me like they're trying that yet. They don't even care that people buy multiple top-tier plans and rotate them every week, because they don't offer a high enough tier for those customers. I don't see any price war happening. We don't know what their real margins are, but I don't see the race there. What signs do you see that Anthropic and OpenAI are in the race to the bottom? | | |
| ▲ | lelanthran 15 hours ago | parent [-] | | > I don't see any price war happening. What signs do you see that Anthropic and OpenAI are in the race to the bottom? There don't need to be signs of a race (or a price war), only signs of commodification; all you need is a lack of differentiation between providers for something to turn into a commodity. When you're buying a commodity, there's no big difference between getting it delivered by $PROVIDER_1 and getting it delivered by $PROVIDER_2. The models are all converging quality-wise. Right now the number of people who swear by OpenAI models is about the same as the number who swear by Anthropic models, which is about the same as the number who swear by Google's models, etc. When you're selling a commodity, the only differentiation is in the customer experience. Sure, there's no price war right now, but almost everyone who is interested is already playing with multiple models anyway. IOW, the target consumers are already treating LLMs as a commodity. |
|
|
| |
| ▲ | flyinglizard a day ago | parent | prev [-] | | Gmail has 1.8b active users, each with thousands of emails in their inbox. The number of emails they could train on is probably in the trillions. | | |
| ▲ | brokencode a day ago | parent [-] | | Email seems like not only a pretty terrible training data set, since most of it is marketing spam of dubious value, but also an invasion of privacy, since information about individuals could possibly leak via the model. | | |
| ▲ | palmotea a day ago | parent [-] | | > Email seems like not only a pretty terrible training data set, since most of it is marketing spam of dubious value Google probably even has an advantage there: filter out everything except messages sent from a valid Gmail account to a valid Gmail account. Doing that drops most of the spam and marketing and leaves mostly human-to-human interactions. On top of that, they have their spam filters. | | |
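A minimal sketch of that filtering idea, assuming a hypothetical message-record shape with `from`/`to` fields (a real pipeline would be far more involved):

```python
# Keep only mail where the sender and all recipients are consumer
# Gmail addresses, a crude proxy for human-to-human conversation.
def is_gmail(addr: str) -> bool:
    return addr.strip().lower().endswith("@gmail.com")

def human_to_human(messages):
    for msg in messages:
        if is_gmail(msg["from"]) and msg["to"] and all(map(is_gmail, msg["to"])):
            yield msg  # likely person-to-person
        # else: newsletters, receipts, and marketing blasts are dropped

# Hypothetical records, for illustration only:
kept = list(human_to_human([
    {"from": "alice@gmail.com", "to": ["bob@gmail.com"]},    # kept
    {"from": "deals@shop.com",  "to": ["alice@gmail.com"]},  # dropped
]))
print(len(kept))  # -> 1
```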
| ▲ | Terr_ a day ago | parent [-] | | I'd upgrade that "could possibly" leak to "will absolutely" leak, albeit with some loss of fidelity. Imagine industrial espionage where someone asks the model to roleplay a fictional email exchange between named corporate figures at a particular company. |
|
|
|
|
|
| ▲ | SoftTalker a day ago | parent | prev | next [-] |
| > Google has ... a company culture oriented towards optimizing energy usage and hardware costs. Google has a company culture of luring you in with freebies and then mining your data to sell ads. |
|
| ▲ | AdrianB1 a day ago | parent | prev | next [-] |
| > if Google can give away LLM usage (which is obviously subsidized), it can't be astronomically expensive There is a recent article by Linus Sebastian (LTT) about YouTube: building a competitor is almost impossible to sustain because the infrastructure is astronomically expensive relative to the potential revenue. |
|
| ▲ | SecretDreams a day ago | parent | prev [-] |
| I do not disagree that they will get cheaper, but I am pointing out that none of this is being reflected in hardware pricing. You state LLMs are becoming more optimized (less expensive). I agree. This should have a knock-on effect on hardware prices, but it is not having one. Where is the disconnect? Are hardware prices a lagging indicator? Is Nvidia still a 5 trillion dollar company if we see another 33x improvement in "energy consumed per text prompt"? |
| |
| ▲ | zozbot234 a day ago | parent [-] | | Jevons paradox. As AI gets more efficient, its potential scope expands further and the hardware it runs on becomes even more valuable. BTW, the absolute lowest "energy consumed per logical operation" is achieved with so-called 'neuromorphic' hardware that's dog slow in latency terms but more than compensates with extreme throughput. (A bit like an even more extreme version of current NPUs/TPUs.) That's the kind of hardware we should be using for AI training once power use for that workload is measured in gigawatts. Gaming-focused GPUs are better than your average CPU, but they're absolutely not the optimum. |
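Jevons paradox in one worked calculation; every number below is assumed except the 33x efficiency figure quoted upthread:

```python
# Efficiency per prompt improves 33x, but cheaper prompts drive more
# usage (an assumed 50x growth here), so total demand still rises.
energy_per_prompt_before = 33.0      # arbitrary units
energy_per_prompt_after = 1.0        # the quoted 33x reduction
prompts_before = 1_000_000
prompts_after = 50_000_000           # assumed usage growth

total_before = energy_per_prompt_before * prompts_before  # 33,000,000
total_after = energy_per_prompt_after * prompts_after     # 50,000,000
print(f"total energy: {total_before:,.0f} -> {total_after:,.0f}")
# Per-prompt energy fell 33x, yet aggregate demand (and with it the
# value of the hardware serving it) went up.
```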
|