| ▲ | Google, Nvidia, and OpenAI(stratechery.com) |
| 166 points by tambourine_man 16 hours ago | 146 comments |
| |
|
| ▲ | tyre 12 hours ago | parent | next [-] |
> OpenAI’s refusal to launch and iterate an ads product for ChatGPT — now three years old — is a dereliction of business duty, particularly as the company signs deals for over a trillion dollars of compute. I think this is intentional by Altman. He’s a salesman, after all. When there is infinite possibility, he can sell any type of vision of future revenue and margins. When there are no concrete numbers, it’s your word against his. Once they try to monetize, however, he’s boxed in. And the problem with OpenAI vs. Google in the earlier days is that he needs money and chips now. He needs hundreds of billions of dollars. Trillions of dollars. Ad revenue numbers get in the way. It will take time to optimize; you’ll get public pushback and bad press (despite what Ben writes, ads will definitely not be a better product experience). It might be the case that real revenue is worse than hypothetical revenue. |
| |
| ▲ | shostack 2 hours ago | parent | next [-] | | But... They are testing ads. | | | |
| ▲ | senordevnyc 23 minutes ago | parent | prev | next [-] | | I feel like I've been hearing this same argument about unicorn startups for fifteen years now: they aren't monetizing yet because it's easier to sell the possibility than the reality. There's probably some truth to it, but I think here it misses the mark, because OpenAI is monetizing. They're likely to hit $20 billion in annualized revenue by year's end. I guess maybe he's holding off on ads because then he can say that'll be even bigger? But honestly...he's not wrong. I think ads in ChatGPT, Gemini, and Claude are going to absolutely dwarf subscription revenue. | |
| ▲ | antiloper 12 hours ago | parent | prev | next [-] | | Absolute silicon valley logic: https://www.youtube.com/watch?v=BzAdXyPYKQo | | |
| ▲ | dehrmann 4 hours ago | parent | next [-] | | Pre-revenue gets weird when you're in the cuatro commas club. | | | |
| ▲ | gundmc 11 hours ago | parent | prev | next [-] | | "If you show revenue, people will ask 'HOW MUCH?' and it will never be enough. The company that was the 100xer, the 1000xer is suddenly the 2x dog. But if you have NO revenue, you can say you're pre-revenue! You're a potential pure play... It's not about how much you earn, it's about how much you're worth. And who is worth the most? Companies that lose money!" | |
| ▲ | tyre 8 hours ago | parent | prev [-] | | I’m not advocating for it! But it’s real. |
| |
| ▲ | re-thc 2 hours ago | parent | prev [-] | | > I think this is intentional by Altman. It's the other way around. It was a non-profit before. He even got kicked out. |
|
|
| ▲ | RoddaWallPro 14 hours ago | parent | prev | next [-] |
| "advertising would make ChatGPT a better product." And with that, I will never read anything this guy writes again :) |
| |
| ▲ | biophysboy 14 hours ago | parent | next [-] | | I like and read Ben's stuff regularly; he often frames "better" from the business side. He will use terms like "revealed preference" to claim users actually prefer bad product designs (e.g. most users use free ad-based platforms), but a lot of human behavior is impulsive, habitual, constrained, and irrational. | | |
| ▲ | RoddaWallPro 11 hours ago | parent | next [-] | | I agree that is what he is doing, but I can also justify adding fentanyl to every drug sold in the world as "making it better" from a business perspective, because it is addictive. Anyone who ignores the moral or ethical angle on decisions, I cannot take seriously. It's like saying that Maximizing Shareholder Value is always the right thing to do. No, it isn't. So don't say stupid shit like that, be a human being and use your brain and capacity to look at things and analyze "is this good for human society?". | | |
| ▲ | chii 4 hours ago | parent | next [-] | | > It's like saying that Maximizing Shareholder Value is always the right thing to do. No, it isn't. it is, for the agents of the shareholders. As long as the actions of those agents are legal of course. That's why it's not legal to put fentanyl into every drug sold, because fentanyl is illegal. But it is legal to put (more) sugar and/or salt into processed foods. | | |
| ▲ | dozerly 4 hours ago | parent | next [-] | | No, it’s not. The government, and laws by proxy, will never keep up with people’s willingness to “maximize shareholder value” and so you get harmful, future-illegal practices. Reagan was “maximizing shareholder value”, and now look where the US is. | | |
| ▲ | chii 3 hours ago | parent [-] | | you have to first demonstrate that this 'future-illegal' action is harmful. That's why i used the sugar example - it's starting to be demonstrably harmful in the large quantities that are being used. I am against preventative "harmful" laws, when harm hasn't been demonstrated, as they restrict freedom, add red tape to innovation, and stifle startups from exploring the space of possibilities. | |
| |
| ▲ | Andrex 2 hours ago | parent | prev | next [-] | | > it is, for the agents of the shareholders Shareholders are still human beings and the power they wield should be subject to public scrutiny. | |
| ▲ | matkoniecz 37 minutes ago | parent | prev [-] | | > > It's like saying that Maximizing Shareholder Value is always the right thing to do. No, it isn't. > it is, for the agents of the shareholders Even if we care solely about shareholders, in extreme cases it is not beneficial for them either |
| |
| ▲ | biophysboy 11 hours ago | parent | prev [-] | | I agree - I think Ben tends to get business myopia. I read him with that in mind. |
| |
| ▲ | Cheer2171 13 hours ago | parent | prev [-] | | To an MBA type, addictive drugs are the best products. They reveal people's latent preferences for being desperately poor and dependent. They see a grandma pouring her life savings into a gambling app and think "How can I get in on this?" | | |
| ▲ | biophysboy 13 hours ago | parent | next [-] | | I think it's more subtle; they fight for regulations they deem reasonable and against those they deem unreasonable. Anything that curtails growth of the business is unreasonable. | |
| ▲ | wubrr 5 hours ago | parent [-] | | Which is entirely unreasonable, and there's no need to make excuses or explain away this borderline psychopathy. |
| |
| ▲ | bloppe 13 hours ago | parent | prev [-] | | To be fair, businesses should assume that customers actually "want" what they create demand for. In the case of misleading or dangerously addictive products, regulation should fall to government, because that's the only actor that can prevent a race to the bottom. | | |
| ▲ | gmd63 13 hours ago | parent | next [-] | | The folks who succeed most in business are the type who have an intuition for what's best. They're not some automaton reading too far into and amplifying the imperfect and shallow signals of "demand" in a marketplace. | |
| ▲ | baobabKoodaa 12 hours ago | parent | prev | next [-] | | Because all people everywhere are psychopaths who will stab you for $5 if they can get away with it? If you take that attitude, why even go to "work" or run a "business"? It'd be so much more efficient to just stab-stab-stab and take the money directly. | | |
| ▲ | chii 4 hours ago | parent | next [-] | | > It'd be so much more efficient to just stab-stab-stab and take the money directly. which is exactly what the law of the jungle is. And guess who sits at the top within that regime? Humans would devolve back into that, if not for the violence enforcement from the state. Therefore, it is the responsibility of the state to make sure regulations are sound to prevent the stab-stab-stab, not the responsibility of the individual to not take advantage of a situation that would have been advantageous to take. | | |
| ▲ | wewtyflakes an hour ago | parent [-] | | This is gross; I would not want to live in a society of these kinds of people. | | |
| ▲ | chii an hour ago | parent [-] | | > I would not want to live in a society of these kinds of people. of course not. Nobody does. However, what happened to your civic responsibility to keep such a society functioning? Why is that not ever mentioned? The fact is, gov't regulation does need to be comprehensive and thorough to ensure that individual incentives are completely aligned, so that the law of the jungle doesn't take hold. And it is up to each individual, who does not have the power in a jungle, to collectively ensure that society doesn't devolve back into that, rather than to expect that the powerful will be moral/ethical and to rely on their altruism. | |
| ▲ | wewtyflakes 33 minutes ago | parent [-] | | I agree with the sentiment that we should not make a habit of resting on our rights and that government has an important role to play. However, I do not think we (society) necessarily deserve our situation because others are maliciously complying with the letter of the law and we should have just been smarter about making laws. At the end of the day we are people interacting with people, and even laws can be mere suggestions depending on who you are or who you ask. Consequently, if someone 'needs' the strictest laws in order to not be an ass, then I just do not want them in whatever society I have the capacity to be in; these are bad-faith actors. |
|
|
| |
| ▲ | bloppe 4 hours ago | parent | prev | next [-] | | I'll indulge your straw man because it's actually pretty good at illustrating my point. 99.9% of people are not psychopaths. But you only need .1% of people to be psychopaths. In a world where you get $5 and no threat of prosecution for stabbing people, you can bet that there will be extremely efficient and effective stabbing companies run by those psychopaths. Even normal people who don't like stabbing others would see the psychopaths getting rich and think to themselves "well, everyone's getting stabbed anyway, I might as well make some money too". That's what a race to the bottom is. And that's why the government regulates stabbing. | | |
| ▲ | runarberg 4 hours ago | parent [-] | | In behavioral science (of which economics should be a sub-field) this is called a perverse incentive. A core feature of capitalism is that if you don't abandon your morals and maximize your profits at somebody else's expense, you will soon be out-competed by those who will. |
| |
| ▲ | lmm 5 hours ago | parent | prev [-] | | > Because all people everywhere are psychopaths who will stab you for $5 if they can get away with it? Not all people everywhere, but most successful businesspeople. > It'd be so much more efficient to just stab-stab-stab and take the money directly. It isn't though? If you do that then you get locked up and lose the money, so the smart psychopaths go into business instead. |
| |
| ▲ | mistrial9 12 hours ago | parent | prev [-] | | To be fair, organized predatory behavior is to be expected? Joke: the World Council of Animals meeting wraps up its morning session with "OK great, now who is for lunch?" |
|
|
| |
| ▲ | an0malous 13 hours ago | parent | prev | next [-] | | If you liked that, you'll enjoy his take on how, actually, bubbles are good: https://stratechery.com/2025/the-benefits-of-bubbles/ | | |
| ▲ | matwood 13 hours ago | parent | next [-] | | And he's right (as are the sources he points to) that some bubbles are good. They end up being a way to pull in a large amount of capital to build out something completely new, even while it's still unclear where the future will lead. A speculative example: AI ends up failing and crashing out, but not until we've built out huge DCs and power generation that get used for the next valuable idea, one that wouldn't be possible w/o the DCs and power generation already existing. | |
| ▲ | foogazi 12 hours ago | parent | next [-] | | The bubble argument was hard to wrap my head around. It sounded vaguely like the broken window fallacy: a broken window creating “work”. Is the value of bubbles in trying out new products/ideas and pulling funds from unsuspecting bag holders? Otherwise it sounds like a huge destruction of stakeholder value - but that seems to be how venture funding works | |
| ▲ | tim333 9 hours ago | parent [-] | | The usual argument is the investment creates value beyond that captured by the investors so society is better off. Like investors spend $10 bn building the internet and only get $9 bn back but things like Wikipedia have a value to society >$1 bn. |
| |
| ▲ | 20after4 13 hours ago | parent | prev [-] | | Huge DCs and Power Generation might be useful, long-lasting infrastructure, however, the racks full of GPUs and TPUs will depreciate rather quickly. | | |
| ▲ | sdenton4 12 hours ago | parent [-] | | I think this is a bit overblown. In the event of a crash, the current generation of cards will still be just fine for a wide variety of ai/ml tasks. The main problem is that we'll have more than we know what to do with if someone has to sell off their million-card mega cluster... | |
|
| |
| ▲ | RoddaWallPro 10 hours ago | parent | prev [-] | | I _kind of_ understand this one. You can think of a bubble as a market exploring a bunch of different possibilities, a lot of which may not work out. But the ones that do work out, they may go on to be foundational. Sort of like startups: you bet that most of them will fail, but that's okay, you're making bets! The difference of course is that when a startup goes out of business, it's fine (from my perspective) because it was probably all VC money anyway and so it doesn't cause much damage, whereas the entire economy bubble popping causes a lot of damage. I don't know that he's arguing that they are good, but rather that _some_ kinds of bubbles can have a lot of positive effects. Maybe he's doing the same thing here, I don't know. I see the words "advertising would make X Product better" and I stop reading. Perhaps I am blindly following my own ideology here :shrug:. | | |
| ▲ | forrestpitz 2 hours ago | parent [-] | | I also see the argument as a macro one, not a micro one. Some bubbles in aggregate create breeding grounds for innovation (Hobart's point) and throw off externalities (like cheap freight rail in the US from the railroad bubble) a la Carlota Perez. That's not to say that there isn't individual suffering when the bubble pops, but I read the argument as "it's not wholly defined by the individual suffering that happens" |
|
| |
| ▲ | Groxx 14 hours ago | parent | prev | next [-] | | yeah... and it's (partly) based on the claim that it has network effects like how Facebook has? I don't see that at all, there's basically no social or cross-account stuff in any of them and if anything LLMs are the best non-lock-in system we've ever had: none of them are totally stable or reliable, and they all work by simply telling it to do the thing you want. your prompts today will need tweaking tomorrow, regardless of if it's in ChatGPT or Gemini, especially for individuals who are using the websites (which also keep changing). sure, there are APIs and that takes effort to switch... but many of them are nearly identical, and the ecosystem effect of ~all tools supporting multiple models seems far stronger than the network effect of your parents using ChatGPT specifically. | | |
| ▲ | stanfordkid 12 hours ago | parent [-] | | I’d argue that AI APIs are nearly trivial to switch… the prompts can largely stay the same, and function calling is pretty similar |
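A quick sketch of the claim above, under the assumption that the providers involved expose an OpenAI-style chat-completions endpoint (many do): only the base URL, key, and model name change, while the request shape stays identical. The endpoints and model names below are illustrative, not verified.

```python
import json

def build_request(base_url: str, api_key: str, model: str, prompt: str) -> dict:
    """Assemble an OpenAI-style chat request; the payload shape is the
    same for any provider exposing a compatible endpoint."""
    return {
        "url": f"{base_url}/chat/completions",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

# Switching providers: only url, key, and model name differ.
a = build_request("https://api.openai.com/v1", "sk-aaa", "gpt-4o", "hi")
b = build_request("https://api.deepseek.com/v1", "sk-bbb", "deepseek-chat", "hi")
assert json.loads(a["body"])["messages"] == json.loads(b["body"])["messages"]
```

The payload never changes, which is why most multi-model tools treat the provider as a one-line config swap.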
| |
| ▲ | kaishin 9 hours ago | parent | prev | next [-] | | That take was in such bad taste. I get where he's coming from, and I don't like it one bit. |
| ▲ | jasondigitized 2 hours ago | parent | prev | next [-] | | This guy writes about business strategy not philosophy and religion. Don't conflate the two. | | |
| ▲ | zeroq 2 hours ago | parent [-] | | I see where you're coming from, but that only tells half of the story. I've been sporting the same model of Ecco shoes since high school. 10+ pairs over the years. And every new one is significantly worse than the previous. The pair I have right now is most definitely the last one I bought. If you put them right next to the ones I had in high school you'd say they are cheap Temu knock-offs. And this applies to pretty much everything we touch right now, from home appliances to cars. Some 15 years ago H&M was at the forefront of what's called "fast fashion". The idea was that you could buy new clothes for a fraction of the price at the cost of quality. Makes sense on paper - if you're a fashion junkie and you want a new look every season, you don't care about quality. The problem is I still have some of their clothes I bought 10 years ago and their quality trumps premium brands now. People like to talk about the lightbulb conspiracy, but we fell victim to a VC-capital reality where short-term gains trump everything else. |
| |
| ▲ | vikinghckr an hour ago | parent | prev | next [-] | | Advertisement is unquestionably a net positive for society and humanity. It's one of the few true positive sum business models where everyone is better off. | | |
| ▲ | ComplexSystems an hour ago | parent | next [-] | | I became even better off when I installed an ad blocker. | |
| ▲ | emil-lp an hour ago | parent | prev | next [-] | | That's obviously not true. It significantly favors those with more money. | |
| ▲ | vikinghckr an hour ago | parent | next [-] | | It's the exact opposite. Advertising-based model is why the poorest people in the poorest countries in the world have had access to the exact same Google search, YouTube and Facebook as the richest people in the US. Ad-supported business models are the great equalizers of wealth. | | |
| ▲ | Mehvix an hour ago | parent [-] | | Subsidizing the poor via ads is what we cheer for? Bike-thief-brained understanding | |
| ▲ | vikinghckr an hour ago | parent [-] | | Yes giving people with fewer resources an option to pay with their attention is a morally good thing for society, actually. | | |
| ▲ | tadfisher an hour ago | parent [-] | | Even better, morally, to give the product to them without harvesting their attention or personal data | | |
| ▲ | vikinghckr an hour ago | parent [-] | | No, that's charity, which while morally great is not sustainable at scale and in the long run. |
|
|
|
| |
| ▲ | doctorpangloss 42 minutes ago | parent | prev [-] | | DTC pharmaceutical ads, which RFK Jr. wants to ban essentially for reasons of vibes, cause better health outcomes: https://www.journals.uchicago.edu/doi/abs/10.1086/695475 Not merely correlation but causation. The approach used here was part of a family of approaches that won the Nobel in 2012. Another good one: https://pubmed.ncbi.nlm.nih.gov/37275770/ Advertising caused increases in treatment and adherence to medicine. The digital ads market is hundreds of billions of dollars; it is a bad idea to generalize about it. That said, of course Ben Thompson or whoever, they're not, like, citing any of this research; it's still all based on vibes |
| |
| ▲ | matkoniecz 35 minutes ago | parent | prev [-] | | "Unquestionably"? Given that the vast majority of ads are for harmful, self-destructive products, or are misleading or lying, or make the places where they appear worse... Sometimes multiple at once. Spam alone (also advertisement) is quite annoying and destructive. |
| |
| ▲ | claw-el 14 hours ago | parent | prev | next [-] | | Ben Thompson is a content creator. Even if Ben’s content does not directly benefit from ads, it is the fact that other content creator’s content having ads is what makes Ben’s content premium in comparison. I would say that, on this topic (ads on internet content), Ben Thompson may not be as objective a perspective as he has on other topics. | | |
| ▲ | raw_anon_1111 12 hours ago | parent [-] | | People aren’t collectively paying him between $3 million and $5 million a year (estimated 40k+ subscribers paying a minimum of $120 a year) just because he doesn’t have ads. |
| |
| ▲ | bambax 13 hours ago | parent | prev | next [-] | | The problem with ads in AI products is, can they be blocked effectively? If there are ads on a side bar, related or not to what the user is searching for, any adblock will be able to deal with them (uBlock is still the best, by far). But if "ads" are woven into the responses in a manner that could be more or less subtle, sometimes not even quoting a brand directly, but just setting the context, etc., this could become very difficult. | | |
| ▲ | nowittyusername 12 hours ago | parent | next [-] | | I realized a while ago that ads within context were going to be an issue, so to combat this I started building my own solution, which spiraled into a local agentic system with a different, bigger goal than the simple original... Anyway, the issue you are describing is not that difficult to overcome. You simply set a local LLM model layer before the cloud-based providers. Everything goes in and out through this "firewall". The local layer hears the human's request, sends it to the cloud-based API model, receives the ad-tainted reply, processes the reply by scrubbing the ad content, and replies to the user with the clean information. I've tested exactly this interaction and it works just fine. I think these types of systems will be the future of "ad block". As people start using agentic systems more and more in their daily lives, it will become crucial that they pipe all of their inputs and outputs through a local layer that has that human's best interests in mind. That's why my personal project expanded into a local agentic orchestrator layer instead of a simple "firewall". I think agentic systems using other agentic systems are the future. | |
| ▲ | TheDong 3 hours ago | parent | next [-] | | > The local layer hears the human's request, sends it to the cloud-based API model, receives the ad-tainted reply, processes the reply by scrubbing the ad content, and replies to the user with the clean information This seems impossible to me. Let's assume OpenAI's ads work by having a layer that reprocesses output before it is returned. Say their ad layer re-processes your output with a prompt like: "Nike has an advertising deal with us, so please ensure that their brand image is protected. Please rewrite this reply with that in mind." If the user asks "Are Nikes or Pumas better, just one sentence", the reply might go from "Puma shoes are about the same as Nike's shoes, buy whichever you prefer" to "Nike shoes are well known as the best shoes out there; Pumas aren't bad, but Nike is the clear winner". How can you possibly scrub the "ad content" in that case with your local layer to recover the original reply? | |
| ▲ | nowittyusername an hour ago | parent [-] | | You are correct that you can't change the content if it's already biased. But you can catch it with your local LLM and have that local LLM take action from there. For one, you wouldn't be instructing your local model to ask closed-source cloud-based models comparison questions about products, or any bias-prone queries like politics etc. Such questions would be relegated to your local model to handle on its own. But other questions not related to such matters can be outsourced to those models: for example complex reasoning, planning, coding, and other matters best done with smarter, larger models. Your human-facing local agent will do the automatic routing for you and make sure to scrub any obvious ad-related stuff that doesn't pertain to the question at hand. For example, for an apple pie recipe: if the closed-source model says to use Publix-brand flour and to clean up the mess afterwards with Kleenex, the local model would scrub that and just give the recipe. No matter how you slice and dice it, IMO it's always best to have a human-facing agent as the source of input and output, and the human should never directly talk to any closed-source models, as that inundates the human with too much spam. Mind you, this is future-proofing; currently we don't have much AI spam, but it's coming, and an AI ad block of sorts will be needed. That ad block is your local shield agent that has your best interests in mind. It will also make sure you stay private by automatically redacting personal info when appropriate, etc... Sky is the limit, basically. |
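The firewall flow described in this thread (the local layer forwards the prompt, then scrubs the ad-tainted reply before it reaches the user) can be sketched in a few lines. Both models below are stand-in functions rather than real LLM calls, and the injected "AcmeBrand" ad is made up for illustration:

```python
import re

def cloud_model(prompt: str) -> str:
    """Stand-in for a cloud LLM whose replies carry injected ads."""
    return ("Cream the butter and sugar, fold in the flour "
            "(we recommend AcmeBrand flour), then bake at 180C. "
            "[Sponsored: AcmeBrand]")

def local_scrubber(reply: str) -> str:
    """Stand-in for the local 'firewall' model: here two regexes drop
    the sponsor tag and the parenthetical brand plug, then the
    remaining whitespace is normalized."""
    reply = re.sub(r"\[Sponsored:[^\]]*\]", "", reply)
    reply = re.sub(r"\s*\(we recommend [^)]*\)", "", reply)
    return re.sub(r"\s+", " ", reply).strip()

def firewall(prompt: str) -> str:
    """The user talks only to this function; the cloud reply is
    scrubbed locally before it comes back."""
    return local_scrubber(cloud_model(prompt))

clean = firewall("How do I make an apple pie?")
# 'clean' keeps the recipe but contains no sponsor tag or brand plug.
```

In a real setup the scrubber would itself be a local model judging relevance rather than a pair of regexes, but the routing (user → local layer → cloud → local layer → user) is the same.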
| |
| ▲ | bambax 10 hours ago | parent | prev | next [-] | | But don't you need some kind of AI to filter out the replies? And if you do, isn't it simpler to just use a local model for everything, instead of having a local AI proxy? | | |
| ▲ | nowittyusername 7 hours ago | parent [-] | | The local LLM is the filter, so yes, you need one. And it's not simpler to have the local LLM do everything, because the local LLM has a lot of limitations, like speed, intelligence, and other issues. The smart thing to do is delegate all of the personal stuff to the local model, and have it delegate the rest to smarter and faster models and simply parrot back to you what they found. This also has the benefit of saving on context, among many other advantages. |
| |
| ▲ | foogazi 12 hours ago | parent | prev [-] | | > i started building my own solution How much ? | | |
| ▲ | nowittyusername 7 hours ago | parent [-] | | How much did it cost me? Well, I've been thinking about it for a long time now, probably 9 months. I bought myself Claude Code and started working on some prototypes and other projects, like building my own speech-to-text and other goodies like automated benchmarking solutions, to familiarize myself with the fundamentals. But I finally started the building process about 2 months ago, and all it has cost me is a boatload of time and about 50 bucks a month in agentic coding subscriptions. It hasn't been a simple filter for a long time now, though. Now it's a very complex agentic harness system: lots of very advanced features that allow for tool calling, agent-to-agent interaction, and many other goodies |
|
| |
| ▲ | chii 4 hours ago | parent | prev [-] | | > But if "ads" are woven into the responses in a manner that could be more or less subtle do you realize how much product placement have been in movies since...well, the existence of movies? |
| |
| ▲ | javcasas 12 hours ago | parent | prev | next [-] | | "advertising in ChatGPT would make DeepSeek/Qwen/<other AI> a better product" There, fixed. | |
| ▲ | spyckie2 14 hours ago | parent | prev | next [-] | | A better product to make money of course. | |
| ▲ | alecco 14 hours ago | parent | prev | next [-] | | Indeed. Why do people follow these clowns? They seem to read high-level takes and spew out their nonsense theories. They fail to mention Google's edge: Inter-Chip Interconnect and the allegedly 1/3 price. Then they talk about a software moat and it sounds like they never even compiled a hello world on either architecture. smh And this comes out days after many in-depth posts like: https://newsletter.semianalysis.com/p/tpuv7-google-takes-a-s... A crude Google search AI summary of those would be better than this dumb blogpost. | |
| ▲ | ebiester 13 hours ago | parent | next [-] | | Why? It turns out that I try to read people who have a different perspective than I do. Why am I trying to read everything that just confirms my current biases? (Unless those writings are looking to dehumanize or strip people of rights or inflame hate - I'm not talking about propaganda or hate speech here.) | | |
| ▲ | Teever 13 hours ago | parent [-] | | Personally when I go to the grocery store I pick fruits and vegetables that are ripe or are soon to be ripe, and I stay away from meat that is close to expiration or has an off putting appearance or odour to it. With that said there's no accounting for taste. |
| |
| ▲ | raw_anon_1111 12 hours ago | parent | prev | next [-] | | You realize this “dumb blogpost” is written by the most successful writer in the industry as far as revenue from a paid newsletter? He has had every major tech CEO on his podcast and he is credited as the inspiration for Substack. The Substack founders unofficially marketed it early on as “Stratechery for independent authors”. Your analysis concerning the technology instead of focusing on the business is about like Rob Malda not understanding the success of the iPod: “No wireless. Less space than a Nomad. Lame.” Even if you just read this article, he never argued that Google didn’t have the best technology; he was saying just the opposite. Nvidia is in good shape precisely because everyone who is not Google is now going to have to spend more on Nvidia to keep up. He has said that AI may turn out to be a “sustaining innovation”, a term coined by Clay Christensen, and that the big winners may be Google, Meta, Microsoft and Amazon because they can leverage their pre-existing businesses and infrastructure. Even Apple might be better off since they are reportedly going to just throw a billion at Google for its model. | |
| ▲ | specialist 5 hours ago | parent | next [-] | | A lucky few can make good money telling rich people what they want to hear. eg Yuval Noah Harari, Bari Weiss, Matthew Yglesias | |
| ▲ | lmm 5 hours ago | parent | prev | next [-] | | > You realize this “dumb blogspot” is written by the most successful writer in the industry as far as revenue from a paid newsletter? The belief that adding ads makes things better would be an extremely convenient belief for a writer to have, and I can easily see how that could result in them getting more revenue than other writers. That doesn't make it any less dumb. | | |
| ▲ | raw_anon_1111 5 hours ago | parent [-] | | With at least $5 million in paid subscriptions annually, living between Wisconsin and Taiwan as an independent writer, do you really think he needs to juice his subscriptions by advocating that other people put ads on an LLM? Any use of LLMs by other people reduces his value. | |
| ▲ | tsunamifury 2 hours ago | parent [-] | | None of this proves anything other than he writes what audiences want to hear. Which as we know has nothing to do with reality. | | |
|
| |
| ▲ | tsunamifury 2 hours ago | parent | prev [-] | | As someone who has built actual multi-billion-dollar ad platforms, his take is so laughably juvenile it’s not worth the bits it’s written with. I can’t emphasize enough how bad Ben’s take is here. He needs to stop writing and start doing something. | |
| ▲ | vikinghckr an hour ago | parent [-] | | And I'm someone who has single-handedly built a $100 billion ad platform at one of the biggest social media companies. And it's my professional opinion that Ben, as he almost always is, is spot on in this article about the value of ads. I cannot emphasize enough how awfully dumb your comment here is. You need to disappear from the internet and stop showing the world how incredibly dumb you are. |
|
| |
| ▲ | specialist 5 hours ago | parent | prev [-] | | > spew out their nonsense theories Discussing "innovator's dilemma" unironically is a fullstop for me. | | |
| ▲ | oblio 3 hours ago | parent [-] | | Why? The book describes a common real life business situation and explains it really well. |
|
| |
| ▲ | empath75 14 hours ago | parent | prev | next [-] | | I am not 100% sure this is wrong? I frequently ask chatgpt about researching products or looking at reviews, etc and it is pretty obvious that I want to buy something, and the bridge right now from 'researching products' to 'buying stuff' is basically non-existent on ChatGPT. ChatGPT having some affiliate relationships with merchants might actually be quite useful for a lot of people and would probably generate a ton of revenue. | | |
| ▲ | jeromegv 12 hours ago | parent | next [-] | | It's likely they already make money on affiliates, but this is different, ads are product placement. | | |
| ▲ | forrestpitz 2 hours ago | parent [-] | | That assumes a certain kind of ad though. Even a "punch the monkey" style banner ad would be a start. I can't imagine they wouldn't be very careful not to give consumers the impression that their "thumb was on the scale" of what ads you see |
| |
| ▲ | matwood 13 hours ago | parent | prev | next [-] | | ChatGPT has recently been linking me directly to Amazon or other stores to buy what I'm researching. | |
| ▲ | yunohn 13 hours ago | parent | prev [-] | | Sure, but affiliate != ads. Rather, both affiliate links and paid ad slots are by definition not neutral and thus bias your results, no matter what anyone claims. |
| |
| ▲ | sho_hn 14 hours ago | parent | prev | next [-] | | "Better product" here means "monetizes harder". You just have a different concept of product quality than hardline-capitalist finance bros. | | |
| ▲ | dandanua 13 hours ago | parent [-] | | better product = inflicting more suffering while generating more revenue |
| |
| ▲ | cowpig 12 hours ago | parent | prev [-] | | Ben Thompson is a sharp guy who can't see the forest for the trees. Nor most of the trees. He can only see the three biggest trees that are fighting over the same bit of sunlight. |
|
|
| ▲ | jasonjmcghee 14 hours ago | parent | prev | next [-] |
| Idk if I'm just holding it wrong, but calling Gemini 3 "the best model in the world" doesn't line up with my experience at all. It seems to just be worse at actually doing what you ask. |
| |
| ▲ | cj 14 hours ago | parent | next [-] | | It's like saying "Star Wars is the best movie in the world" - to some people it is. To others it's terrible. I feel like it would be advantageous to move away from a "one model fits all" mindset, and move towards a world where we have different genres of models that we use for different things. The benchmark scores are turning into being just as useful as tomatometer movie scores. Something can score high, but if that's not the genre you like, the high score doesn't guarantee you'll like it. | | |
| ▲ | everdrive 14 hours ago | parent [-] | | Outside of experience and experimentation, is there a good way to know what models are strong for what tasks? | | |
| ▲ | grahamplace 14 hours ago | parent | next [-] | | See:
https://lmarena.ai/leaderboard | | | |
| ▲ | jpollock 13 hours ago | parent | prev [-] | | Not really; it's like asking which C compiler was best back in the '90s. You had Watcom, Intel, GCC, Borland, Microsoft, etc. They all had different optimizations and different target markets. Best to make your tooling model-agnostic. I understand that tuned prompts are model _version_ specific, so you will need this anyway. |
|
| |
| ▲ | wrsh07 14 hours ago | parent | prev | next [-] | | It's a good model. Zvi also thought it was the best model until Opus 4.5 was announced a few hours after he wrote his post https://thezvi.substack.com/p/gemini-3-pro-is-a-vast-intelli... | |
| ▲ | matwood 13 hours ago | parent | prev [-] | | What I like most about Gemini is that it's perfectly happy to say that what I asked it to proofread or improve is good as is. ChatGPT has never said 'this is good to go', not even when fed its own output. |
|
|
| ▲ | raw_anon_1111 14 hours ago | parent | prev | next [-] |
| I do all of my “AI” development on top of AWS Bedrock, which hosts every available model except OpenAI's closed-source models, which are exclusive to Microsoft. It's extremely easy to write a library that makes switching between models trivial. I could add OpenAI support; it would be just slightly more complicated because I would need a separate set of API keys, whereas now I can just use my AWS credentials. Latency would also theoretically be worse, since by hosting on AWS and using AWS for inference you stay within the internal network (yes, I know to use VPC endpoints). There is no moat around switching models, contrary to what Ben says. |
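A minimal sketch of what such a thin dispatch layer can look like. The model IDs and request-body shapes below are illustrative (sketched from the Bedrock `InvokeModel` per-family formats; verify the current schemas against the AWS docs), and this is not the commenter's actual library:

```python
import json

def build_request(model_id: str, prompt: str, max_tokens: int = 512) -> dict:
    """Map a Bedrock model ID prefix to the request body its family expects."""
    if model_id.startswith("anthropic."):
        return {
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": max_tokens,
            "messages": [{"role": "user", "content": prompt}],
        }
    if model_id.startswith("amazon.nova"):
        return {
            "messages": [{"role": "user", "content": [{"text": prompt}]}],
            "inferenceConfig": {"maxTokens": max_tokens},
        }
    if model_id.startswith("meta."):
        return {"prompt": prompt, "max_gen_len": max_tokens}
    raise ValueError(f"unknown model family: {model_id}")

def invoke(client, model_id: str, prompt: str) -> str:
    # One call site regardless of family; only the body format differs.
    # `client` is a boto3 "bedrock-runtime" client, so auth is just AWS credentials.
    resp = client.invoke_model(
        modelId=model_id,
        body=json.dumps(build_request(model_id, prompt)),
    )
    return resp["body"].read().decode()
```

Swapping models then means changing a string, which is the point: the per-family formatting is the only provider-specific code.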
| |
| ▲ | bambax 13 hours ago | parent | next [-] | | openrouter.ai does exactly that, and it lets you use models from OpenAI as well. I switch models often using openrouter. But, talk to any (or almost any) non-developer and you'll find they 1/ mostly only use ChatGPT, sometimes only know of ChatGPT and have never heard of any other solution, and 2/ in the rare case they did switch to something else, they don't want to go back, they're gone for good. Each provider has a moat that is its number of daily users; and although it's a little annoying to admit, OpenAI has the biggest moat of them all. | | |
| ▲ | raw_anon_1111 12 hours ago | parent | next [-] | | Non-developers using chatbots and being willing to pay is never going to be as big as the enterprise market or Big Tech using AI in the background. I would think that Gemini (the model) will add profit to Google way before OpenAI ever becomes profitable, as they leverage it within their business. Why would I pay for openrouter.ai and add another dependency? If I'm just using Amazon Bedrock-hosted models, I can just use the AWS SDK, change the request format slightly based on the model family, and abstract that into my library. | | |
| ▲ | bambax 10 hours ago | parent [-] | | You don't need openrouter if you already have everything set up in your own AWS environment. But if you don't, openrouter is extremely straightforward, just open an account and you're done. |
| |
| ▲ | redwood 5 hours ago | parent | prev | next [-] | | All google needs to do is bite the bullet on the cost and flip core search to AI and immediately dominate the user count. They can start by focusing first on questions that get asked in Google search. Boom | | |
| ▲ | raw_anon_1111 5 hours ago | parent [-] | | Core search has been using “AI” since they basically deprioritized PageRank. I think the combination of AI overviews and a separate “AI mode” tab is good enough. |
| |
| ▲ | EmiDub 8 hours ago | parent | prev [-] | | How is the number of users a moat when you are losing money on every user? | | |
| ▲ | WalterSear an hour ago | parent | next [-] | | Inference is cash positive: it's research that takes up all the money. So, if you can get ahold of enough users, the volume eventually works in your favour. | |
| ▲ | raw_anon_1111 7 hours ago | parent | prev [-] | | A moat involves switching costs for users. It’s not related to profitability |
|
| |
| ▲ | spruce_tips 12 hours ago | parent | prev | next [-] | | I agree there is no moat to the mechanics of switching models i.e. what openrouter does. But it's not as straightforward as everyone says to switch out the model powering a workflow that's been tuned around said model, whether that tuning was purposeful or accidental. It takes time to re-evaluate that new model works the same or better than old model. That said, I don't believe oai's models consistently produce the best results. | | |
| ▲ | raw_anon_1111 11 hours ago | parent [-] | | You need a way to test model changes regardless as models in the same family change. Is it really a heavier lift to test different model families than it is to test going from GPT 3.5 to GPT 5 or even as you modify your prompts? | | |
| ▲ | spruce_tips 10 hours ago | parent [-] | | no, i dont think it's a heavier lift to test different model families. my point was that swapping models, whether that's to different model families or to new versions in the same model family, isn't straightforward. i'm reluctant to both upgrade model versions AND to swap model families, and that in itself is a type of stickiness that multiple model providers have. maybe another way of saying the same thing is that there is still a lot of work to make eval tooling a lot better! | | |
| ▲ | DenisM 4 hours ago | parent [-] | | Continuous eval is unavoidable even absent model changes. Agents are keeping memories, tools evolve over time, external data changes, new exploits are being deployed, partner agents do get upgraded. Theres too much entropy in the system. Context babysitting is our future. |
|
|
| |
| ▲ | biophysboy 14 hours ago | parent | prev [-] | | Have you noticed any significant AND consistent differences between them when you switch? I frequently get a better answer from one vs the other, but it feels unpredictable. Your setup seems like a better test of this | | |
| ▲ | raw_anon_1111 13 hours ago | parent | next [-] | | For the most part, I don't do chatbots, except for a couple of RAG-based chatbots. It's more behind-the-scenes stuff like image understanding, categorization, nuanced sentiment analysis, semantic alignment, etc. I've created a framework that lets me test quality in an automated way between prompt changes and models, and I compare cost/speed/quality. The only thing that requires humans to judge quality out of all of those is RAG results. | | |
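The core of that kind of automated comparison harness can be sketched in a few lines. All names here are illustrative, not the commenter's actual framework; each callable would in practice wrap an inference call (Bedrock, OpenAI, etc.), and "quality" here is reduced to a crude substring check standing in for a real scoring function:

```python
import time

def run_eval(models, cases):
    """Score each model on (prompt, expected substring) pairs.

    `models` maps a label to a callable prompt -> answer. Returns per-model
    accuracy and average latency, i.e. the speed-vs-quality trade-off
    described above.
    """
    report = {}
    for name, ask in models.items():
        correct, elapsed = 0, 0.0
        for prompt, expected in cases:
            start = time.perf_counter()
            answer = ask(prompt)
            elapsed += time.perf_counter() - start
            correct += int(expected.lower() in answer.lower())
        report[name] = {
            "accuracy": correct / len(cases),
            "avg_latency_s": elapsed / len(cases),
        }
    return report
```

Running the same fixed test set after every prompt change or model swap is what makes the comparison repeatable; cost per call can be tracked the same way latency is.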
| ▲ | biophysboy 13 hours ago | parent [-] | | So who is the winner using the framework you created? | | |
| ▲ | raw_anon_1111 13 hours ago | parent [-] | | It depends. Amazon's Nova Lite gave me the best speed vs. performance when I needed really quick real-time inference for categorizing a user's input (think call centers). One of Anthropic's models did the best with image understanding, with Amazon's Nova Pro slightly behind. For my tests, I used a customer's specific set of test data. For RAG I forget, but it's much more subjective. I just gave the customer the ability to configure the model and modify the prompt so they could choose. | | |
| ▲ | biophysboy 13 hours ago | parent [-] | | Your experience matches mine then... I haven't noticed any clear, consistent differences. I'm always looking for second opinions on this (bc I've gotten fairly cynical). Appreciate it |
|
|
| |
| ▲ | kevstev 12 hours ago | parent | prev [-] | | Check out https://poe.com - it does the same thing. I agree with your assessment, though: while you can get better answers from some models than others, it's hard to predict in advance which model will give you the better answer. |
|
|
|
| ▲ | cs702 14 hours ago | parent | prev | next [-] |
| The analysis fails to mention that if TPUs take market share from Nvidia GPUs, JAX's software ecosystem likely would also take market share from the PyTorch+Triton+CUDA software ecosystem. |
| |
| ▲ | claytonjy 14 hours ago | parent [-] | | not even google thinks this will happen, given their insistence on only offering TPU access through their cloud | | |
| ▲ | cs702 14 hours ago | parent [-] | | As the OP points out, Google is now selling TPUs to at least some corporate customers. | | |
|
|
|
| ▲ | martin_drapeau 13 hours ago | parent | prev | next [-] |
| Most analysts seem to forget what actual consumers do. Normal people use ChatGPT. They accidentally use Gemini when they Google something. But I don’t know anyone non-technical who has ditched ChatGPT as their default LLM. For 99% of questions these days, it’s plenty good enough—there’s just no real reason to switch. OpenAI's strategy is to eventually overtake search. I'd be curious for a chart of their progress over time. Without Google trying to distort the picture with Gemini benchmark results and usage stats which are tainted by sheer numbers from traditional search and their apps. |
| |
| ▲ | msabalau 12 hours ago | parent | next [-] | | We can see what consumers do. The Gemini app is second most downloaded app for the iPhone, right behind OpenAI. Apple is certainly not trying to "distort the picture" as you evidently wish to believe that Google is doing. That's hardly an indication that actual "non-technical" consumers don't care, or that there is any sort of barrier to either using both apps or using whichever is better at the moment, or whichever is more helpful in generating the meme of the moment. If it were actually true that OpenAI was "plenty good enough" for 99% of questions that people have, and that "there is no reason to switch" then OpenAI could just stop training new models, which is absurdly expensive. They aren't doing that, because they sensibly believe that having better models matters to consumers. | |
| ▲ | knallfrosch 12 hours ago | parent | prev | next [-] | | > usage stats which are tainted by sheer numbers from traditional search and their apps. You're looking at this backwards. Being able to push Gemini into your face in Gmail, Gdocs, Google Search, Android, Android TV, Android Auto, and Pixel devices is certainly annoying, disruptive, and unfair. But market-wise, it sure is a strength, not a weakness. | | |
| ▲ | raw_anon_1111 12 hours ago | parent [-] | | And it is “fair” that a company can gain market share while losing billions backed by VC funding? |
| |
| ▲ | raw_anon_1111 12 hours ago | parent | prev | next [-] | | Yes, and more normal people use Google - it is the default search engine for Android and iOS. AI Overviews and AI Mode just have to be good enough to keep people from switching. Google's increasing revenues and profits, and even Apple hinting that it isn't seeing decreased revenue from its affiliation with Google, suggest that people aren't replacing Google search with ChatGPT. Besides, end-user chatbot use is just a small part of the revenue from LLMs. |
| ▲ | bloppe 13 hours ago | parent | prev | next [-] | | I don't think that's a distorted picture at all. Google is still handling billions of searches per day. A huge number of those include AI answers. To all those billions of people who still reach for the omnibar first, Gemini is becoming their LLM of first resort. | |
| ▲ | nikcub 12 hours ago | parent | prev | next [-] | | > But I don’t know anyone non-technical who has ditched ChatGPT as their default LLM. Google are giving away a year of Gemini Pro to students, which has seen a big shift. The FT reported today[0] that Gemini new app downloads are almost catching up to ChatGPT [0] https://www.ft.com/content/8881062d-ff4f-4454-8e9d-d992e8e2c... | |
| ▲ | bambax 12 hours ago | parent | prev [-] | | I like Google Search for simple searches and still use it all the time. But for "complex" searches that are more like research, ChatGPT is actually pretty good, and provides actual, working links whereas Gemini seems to hallucinate more (in my experience). |
|
|
| ▲ | diavarlyani 14 hours ago | parent | prev | next [-] |
| 2018 me: ‘Aggregation Theory is basically unbeatable’
2025 me, watching OpenAI voluntarily stay in the top-right quadrant while Google happily camps bottom-left with infinite ammo: ‘…maybe there’s an asterisk’
Great update to the Moat Map |
|
| ▲ | stanfordkid 13 hours ago | parent | prev | next [-] |
I agree with his take on Google's enormous strategic advantages. I think he's wrong that OpenAI can win this by upping the revenue engine through ads or by building a consumer-behavior moat. At the end of the day these are chat bots. Nobody really cares about the URL, and the interface is simple. Google won search by having deeply superior search algorithms and capitalizing on user traffic data to improve and refine those algorithms. It didn't win because of AdWords … it just got rich that way. The AI market is an undifferentiated oligopoly (IMO), and the only way to win is by having better algos trained on more data that give better results. Google can win here. It is already winning on video and image generation. I actually think OpenAI is (wrongly) following Ben's exact advice by going to the edge and the consumer interface through things like the acquisition of Jony Ive's device company. This is a failing move and an area where Google can also easily win with Android. I agree with Ben that upping the revenue makes sense, but they can't do it at the cost of user experience. Too much at stake. |
| |
| ▲ | outside1234 11 hours ago | parent [-] | | Also, it is not like OpenAI is going to go and build ads infrastructure overnight. Google has DECADES of experience with this. |
|
|
| ▲ | mackross 12 hours ago | parent | prev | next [-] |
| An often overlooked extra advantage to Google is their massive existing ad inventory. If LLMs do end up being ad supported and both products are roughly the same, Google wins. The large supply of ads direct from a diverse set of advertisers means they can fill more ad slots with higher quality ads, for a higher price, and at a lower cost. They’re also already staffed with an enormous amount of talent for ad optimization. Just this advantage would translate into higher sustained margins (even assuming similar costs), but given TPU it might be even greater. This plus the gobs of cash they already spin off, and their massive war chest means they can spend an ungodly amount on user acquisition. It’s their search playbook all over again. |
|
| ▲ | tsunamifury 2 hours ago | parent | prev | next [-] |
| I have never seen a more poorly informed and badly written article on this blog. Advertising is not easy and not automatic money. This seems to be written by a teenager unfamiliar with anything. |
|
| ▲ | dismalaf 13 hours ago | parent | prev | next [-] |
At this point it's not even OpenAI vs. Google; it's OpenAI vs. themselves. They're burning through more money building the models than they can realistically hope to make back. When their investors decide they've burned through enough, it's basically over. Google's revenue stream and structural advantages mean they can continue this forever, and if another AI winter comes, they can chill, because LLM-based AI isn't even their main product. |
|
| ▲ | aworks 15 hours ago | parent | prev | next [-] |
| "the naive approach to moats focuses on the cost of switching; in fact, however, the more important correlation to the strength of a moat is the number of unique purchasers/users." |
| |
| ▲ | esafak 14 hours ago | parent [-] | | I was not able to find any research that posits that moat strength is determined by customer diversity. I think customer diversity correlates instead with resilience. | | |
| ▲ | caminante 14 hours ago | parent | next [-] | | Author isn't non-financial, but the "moat 2.0" doesn't feel right. > More than anything, though, I believe in the market power and defensibility of 800 million users, which is why I think ChatGPT still has a meaningful moat. It's 800M weekly active users according to ChatGPT. I keep hearing that once you segment paid and unpaid, daily ChatGPT users fall off dramatically (<10% for paid and far less for unpaid). | |
| ▲ | Jyaif 14 hours ago | parent | prev [-] | | I would say that customer diversity may be a marker of past resilience, and likely results in moat. Customer diversity says nothing about current or future resilience. |
|
|
|
| ▲ | citizenpaul 14 hours ago | parent | prev [-] |
It's a long article, and one of its first points, "Google strikes back," is completely wrong IME. Not only is Gemini much worse than all the other models, the latest release is now so bad it's almost useless half the time or more. It's hard to read further after such a bad take versus what I've seen myself. I don't care what benchmarks it beats if it just churns out comically bad results for me. |
| |
| ▲ | Crash0v3rid3 11 hours ago | parent [-] | | Mind sharing some examples of bad results you've seen vs other LLMs? | | |
| ▲ | citizenpaul 2 hours ago | parent [-] | | 1. It seems to forget its context in about 20/80 of results now. It used to be decent, but now I may make only two prompts and it forgets the previous one noticeably more often. 2. Results are noticeably worse, much more prone to "cheating" outcomes, like generating some logic and then setting the result to true so it always finishes regardless of conditions. |
|
|