| ▲ | JohnMakin 2 days ago |
| I've sort of lost some respect for Ed that I had early on in the hype cycle - he's still right about some things, but I can see him slowly and subtly retreating from the strong position he held even a few months ago: that these things will never be useful for anything and it's all a scam because they don't actually do anything at all except burn money. He would say it like 8 times a monologue. I remember one podcast maybe ~6 months ago where he brought on a developer skeptic and tried to get him to say it wasn't actually useful for coding, and the dev was like "maybe not as advertised, but I definitely use it and it is useful to me" - and he pivoted off the topic very quickly. It seems he realizes he was wrong about that and has slowly shifted to "well, maybe they work sometimes, but the cost isn't justified." Which is a reasonable question! I just find his style off-putting: he never admits when he is wrong, and he presents things as absolute fact when he's guessing like the rest of us. He was right about a lot and wrong about a lot, and it's okay to admit that - I don't think his fan base would care. |
|
| ▲ | chromacity 2 days ago | parent | next [-] |
| That's essentially how you become an online pundit. The internet rewards provocative takes. If you have a tendency to doubt yourself and revise your views, then (a) your views become less provocative and thus less likely to translate into click-worthy headlines; (b) you end up biting your tongue or saying "I don't know" often enough that it becomes impossible to keep up with the requisite weekly publication schedule. Which is to say, it's easy to scapegoat this guy, but I think his approach is not any different from other "opinion piece" bloggers that we all tend to reshare. |
| |
| ▲ | bdangubic a day ago | parent [-] | | > That's essentially how you become an online pundit. The internet rewards provocative takes. internet rewards provocative takes - plural. this mate has a single take and writes more about this one thing than jrr tolkien did in all his works combined |
|
|
| ▲ | great_tankard 2 days ago | parent | prev | next [-] |
| This is exactly how I feel about him too. I also find his "number big" approach to writing ("check out my 18,000 word blog about something I'm learning about in real time") off-putting, so I've completely stopped engaging with it. We need better critics of the industry. |
| |
| ▲ | cyclonereef 2 days ago | parent | next [-] | | I always get the sense, around a third of the way through his articles, that whoever reads his drafts just gave up. It goes from wordy and repetitive to wordy, repetitive, filled with rage-bait exasperation and more filler than content. Give the man a 2,000-word budget and he could probably write a better article and cover the same information | |
| ▲ | chromacity a day ago | parent | prev | next [-] | | > We need better critics of the industry. There's plenty, but they don't have enough material to post once a week. And if you don't post once a week, you don't end up on HN once a month. As simple as that. Looking at the blogs that show up on HN regularly, the usual hit rate is 10-25%. | | |
| ▲ | great_tankard a day ago | parent [-] | | Yes, but the HN crowd isn't Zitron's main audience. He appeals to smart people who don't understand anything about computing or business. I do not mean this in a disparaging way; it's a curious audience that has somewhat justifiable moral and aesthetic objections to LLMs and especially the companies peddling them. The problem is that Zitron has charm, an authoritative voice and a very aggressive online presence. That's a difficult combination to compete against. | | |
| ▲ | JohnMakin 19 hours ago | parent [-] | | I started following in early 2024, and the scene was much different - he was mostly a lone voice against the insane hype at the time, which definitely was not delivering. I liked hearing that opinion amongst a wave of bullshit and slop which, coming off the blockchain mania, was very difficult to stomach. The landscape has changed a lot, but his content has remained mostly the same - if anything, more aggressive and less curious (in the beginning he would entertain other viewpoints more often). Since the tech itself has changed around him, his repetitive shtick falls a lot flatter than it used to, because he is completely unwilling to entertain any position other than the one that established his blog/show. |
|
| |
| ▲ | Lerc 2 days ago | parent | prev [-] | | >We need better critics of the industry. I often wonder if there are people promoting people like Zitron because they want the poor quality criticisms to be prominent enough to be the ones that they face most often. It must be a lot easier than having to address valid criticisms. |
|
|
| ▲ | tuveson 2 days ago | parent | prev | next [-] |
| I remember when CrowdStrike caused that huge outage: he basically blamed Windows / Microsoft for it. I kind of stopped taking him seriously after that. I more-or-less agree with his point of view, but he seems more interested in selling outrage than in doing journalism. |
| |
| ▲ | JohnMakin 2 days ago | parent [-] | | I agree. Early on, it felt more like journalism, then I think he blew up and found something that works. If you challenge him on this, he will call you insecure or jealous, which I also find obnoxious[0]. I also find it highly ironic that all the ads on his podcast, at least on apple, are selling AI related products. [0] - https://www.reddit.com/r/BetterOffline/comments/1p5zv33/why_... | | |
| ▲ | CodingJeebus 2 days ago | parent [-] | | FWIW, iHeart Radio probably manages his ad runs. He likely has no say over which ads get run on his show, and as I understand, the podcast advertising market has slowed tremendously in 2026. Podcasting platforms can't be as picky as they used to be. | | |
| ▲ | causalmodels 2 days ago | parent | next [-] | | He may not have control over the podcast spots, but his PR firm does have several AI companies as clients. | |
| ▲ | lesostep 19 hours ago | parent | prev [-] | | iHeart Radio ads are usually for other podcasts, though. I listen through PodBean and all their ads are for other shows. iHeart is so anti-AI they added "Guaranteed Human" in the middle of every podcast they stream. Does Apple run additional ads on podcasts? |
|
|
|
|
| ▲ | mrandish 2 days ago | parent | prev | next [-] |
| I've only read a few of his pieces here and there and had just assumed he was an AI skeptic, so I never thought his position was that LLMs would never be good for anything at any price. That's a pretty extreme thing for any serious person to have ever claimed. Frankly, it seems more like a straw-man exaggeration of AI skepticism. I consider myself to generally be an AI skeptic, but to me that means skepticism about: 1) Nearer-term investment returns on AI businesses and data center build-outs. 2) Claims that LLMs are now rapidly displacing (or soon will displace) most/all senior positions in certain high-skill professions (eg software engineering, music/film making, etc), leading to fewer jobs overall for those kinds of workers and mass unemployment. 3) The "Foom" overnight-takeoff hypothesis that AI will soon be able to iteratively sustain substantial self-improvement, directly yielding profound new fundamental capabilities across infinite generations with no human involvement. I've never thought that AI isn't already quite useful for some things today, or that no investors will ever make money on AI, or that AI won't displace some workers in some types of jobs, or that using AI isn't already helping accelerate the development of AI. Just that there's been a lot of hype, exaggeration and over-estimation about how much impact, how soon and how broad. There will be a few instances of rapid, large impacts, but the majority will be slower, more gradual and less disruptive than the extreme predictions - and many of the most over-the-top predictions may never happen. Not because they can't, but probably for more mundane economic, logistic and human-factors reasons, along the lines of why we're no closer today to the 1950s vision of a flying car in every driveway. |
| |
| ▲ | JohnMakin 19 hours ago | parent | next [-] | | Yea, this is a good article documenting how he was claiming this early on in 2024, that the models were as good as they would ever be and mostly worthless: https://www.theargumentmag.com/p/ais-biggest-critic-has-lost... | | |
| ▲ | mrandish 11 hours ago | parent [-] | | Thanks for that link. It's solidified the growing suspicion I've had that Zitron wasn't worth paying much attention to. If I'd read more than 5 or 6 of his posts I'd probably have gotten there sooner. I now place him alongside AI critics like Gary Marcus whose early intuitions seem to have hardened into an extreme and unchanging broken record instead of a more reasonably nuanced counter to the most frothy AI hype. It's sad because such extreme, over-broad views presented as absolutes save AI zealots the trouble of creating straw men of skeptical positions. It's easier to just lump all AI skeptics together with Zitron and Marcus. I guess it's time to call myself something else, like maybe "AI Realist." My skepticism around AI has always been more specifically targeted to questioning more extreme claims about the degree of impact and how soon it will be meaningfully felt across broader society. I've also tried to be clear my concerns are centered on LLMs and not AI or machine learning in general. My position regarding the long-term (5-10 yrs) has always acknowledged that LLM-based solutions will continue to improve substantially, find more real-world, meaningful use cases and that the currently unsustainable cost-to-value will eventually normalize to a sustainable equilibrium enabling profitable businesses (after some major financial pain); but, that LLMs as a technology still have some fundamental limits on what they can do which aren't separable from how they innately work. Practically, this means I doubt that LLMs, as one type of AI, can ever fully replace an experienced, highly-effective human's ability to self-develop fundamental new knowledge from novel contexts then reduce that learning to high-value abilities in applied practice and then iteratively build on that loop to discover entire new areas of knowledge which weren't even visible without the prior layer of new knowledge - and then do that over and over. 
I've never thought that goal is categorically impossible for AI, just that it will require a new and different approach beyond LLMs. While that new approach may incorporate LLMs as an essential component, just evolving, refining and expanding LLMs alone won't get us there. I'm encouraged that recently several top AI research luminaries have been saying similar things. |
| |
| ▲ | dualvariable 2 days ago | parent | prev | next [-] | | Yeah, I similarly doubt that LLMs are going to directly lead to AGI just via scaling and might almost be a dead end in that direction. But they're still quite useful tools and accelerators or force-multipliers. And you're still going to need humans in the loop. And I'm very worried that the capex buildout will implode once we hit diminishing returns and good-enough models can be run on substantially smaller footprints. It all isn't going away, though, and it will still continue to improve. | |
| ▲ | jcgrillo 2 days ago | parent | prev | next [-] | | But are there any viable AI products? That's, I think, the root of his claim that it won't ever be good for anything. So far I have yet to hear of a really good, successful AI product. Coding tools arguably kind of work, but that's a pretty small addressable market, and it's still quite unclear whether any of them are viable long-term commercial bets. If you can get good results with Qwen 3.6-27B and Opencode what good is an Anthropic? There are a lot of big, unanswered, foundational questions like that in this space. That's pretty alarming given the huge amounts of capital being tossed around. Commercially, I think the jury is still out on whether LLM driven AI will ever be good for anything, and it's not necessarily an unreasonable position to take given the fundamental weaknesses of the underlying technology. | | |
| ▲ | mike_hearn a day ago | parent [-] | | What are you defining as good and successful? ChatGPT has 800M+ WAU, which seems pretty good and successful to me (not financially, but they have time). AI companies aren't selling coding tools. Claude Code is not a coding tool! It's a tool that does coding, which is subtly different. The total addressable market for a coding tool is all developers - maybe 25-30M people worldwide. The total addressable market for people who need code written is potentially a few billion or so, maybe more. | | |
| ▲ | jcgrillo 20 hours ago | parent [-] | | I'd like to see one of the major AI players demonstrate a successful exit. I don't think Coreweave counts here, because their long-term success is so tightly tied to the AI bubble continuing forever, which it probably won't. I want to see a strong company emerge from the bubble and start delivering real, sustainable value to its customers and investors. That would convince me it's possible to build a decent product and a real business on LLM AI technology. |
|
| |
| ▲ | dd8601fn 2 days ago | parent | prev [-] | | Yeah the dotcom crash didn’t prove that the internet was useless for business. And the housing crash didn’t mean houses don’t have value. We get hype bubbles. They’re (nearly?) always bigger than the thing they’re about, in a given time and place. It’s reasonable to think the AI hype train is one of those, to some degree or another. It’s also reasonable to see great utility in llms, now and in the future. |
|
|
| ▲ | hparadiz 2 days ago | parent | prev | next [-] |
| The economics is simple: you spend a few hundred bucks a month on software for an IC you're already paying over ten grand a month, in order to make them more productive. How are supposedly smart industry experts not seeing this obvious fact? Are these guys actually experts? |
| |
| ▲ | Yizahi 2 days ago | parent | next [-] | | It's more like spending potentially a thousand bucks a month (hypothetically - heavy API usage by a developer utilizing top-of-the-line agents 100% of every day, priced to actually be profitable) when you are paying that dev 4 to 6 grand before taxes. Now that would be a close call. | |
| ▲ | Rury a day ago | parent | prev | next [-] | | NVIDIA execs are now saying otherwise: https://fortune.com/2026/04/28/nvidia-executive-cost-of-ai-i... Maybe Ed is right even if he's wrong on some things? | |
| ▲ | xienze 2 days ago | parent | prev | next [-] | | > The economics is spending a few hundred bucks on software for an IC you're already paying over ten grand a month Let's be fair here: the endgame is not "a few hundred bucks a month," not for how much money has been invested. How much extra you'll have to spend, how much more productive it will make developers, and whether companies will go along with it is the trillion-dollar question. | |
| ▲ | koliber 2 days ago | parent | next [-] | | A long time ago a vast majority of people on earth were farmers. They used relatively simple tools like scythes. Over a few centuries, better tools and technology made it so that <5% of the population in rich countries are farmers. They use tools like million-dollar harvesters. | |
| ▲ | legulere 2 days ago | parent [-] | | It's not the 20x efficiency of harvesting technology over what agrarian societies had that makes such machines make sense on its own. It's the productivity of the other 95% of the population that makes their labor cost so high that such expensive machines make economic sense. |
| |
| ▲ | hparadiz 2 days ago | parent | prev [-] | | You know I can just look up the costs per seat, right? It's not that much, and not everyone is a heavy user at an org. And for code, the cost per compute cycle is falling. | |
| ▲ | xienze 2 days ago | parent [-] | | First, the key phrase here is "end game." Whatever you're looking at now isn't where prices will be in short order. Second, it seems hard to believe that hundreds of billions of dollars would be spent and untold numbers of data centers would be built just to gain a measly couple hundred dollars per seat. | |
| ▲ | fragmede a day ago | parent [-] | | But it's a lot of seats. If you get 1 billion people to pay $20/month, that's $20 billion a month - $240 billion a year, or $2.4 trillion over 10 years. |
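The subscription math here is worth running explicitly, since the figures compound monthly. A quick back-of-envelope sketch, using the hypothetical inputs from the comment (1 billion paying subscribers at $20/month - illustrative numbers, not real subscriber data):

```python
# Back-of-envelope subscription revenue. The inputs are the comment's
# hypothetical figures (1B paying users at $20/month), not real data.
subscribers = 1_000_000_000
price_per_month = 20  # USD
years = 10

monthly_revenue = subscribers * price_per_month  # $20B per month
annual_revenue = monthly_revenue * 12            # $240B per year
ten_year_total = annual_revenue * years          # $2.4T over a decade

print(f"monthly: ${monthly_revenue / 1e9:,.0f}B")
print(f"annual:  ${annual_revenue / 1e9:,.0f}B")
print(f"10-year: ${ten_year_total / 1e12:,.1f}T")
```

Even at these optimistic assumptions, the 10-year figure is in the low trillions, which is the scale the capex debate above is really about.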
|
|
| |
| ▲ | CodingJeebus 2 days ago | parent | prev [-] | | It's a few hundred bucks per month for now, but that's not going to last. At some point, the industry is going to pivot towards tracking token-based productivity because it's not going to be cheap forever unless FOSS models catch up. | | |
| ▲ | m4rtink 2 days ago | parent | next [-] | | Please don't call open weight models FOSS models - that's actually very wrong, unless you actually have all the training data and can modify the data and training methodology to retrain the model yourself. | |
| ▲ | zozbot234 2 days ago | parent | prev [-] | | FOSS models have effectively caught up wrt. scale - see e.g. the latest DeepSeek V4 series - but they still require major hardware resources (hundreds of gigabytes of RAM for a very lean deployment targeting single- or few-user inference) to run at acceptable throughput. |
|
|
|
| ▲ | cottoneyejoe 2 days ago | parent | prev | next [-] |
| His reasoning about costs is also completely flawed. API fees aren't the providers' costs. They're a largely arbitrary number that providers think they can get away with, based on what everyone else seems to be charging, and one they also expect to cover on-demand usage as well as their research, marketing, and stock buybacks. They likely have a 60-90% gross margin. |
| |
|
| ▲ | Yizahi 2 days ago | parent | prev | next [-] |
| Ed's writing style is often off-putting, repetitive and sometimes gives almost "desperate" vibes. But he does raise questions no one in the industry is seriously entertaining and exploring: what if those monsters are indeed unprofitable - now what? So while I stopped reading him regularly, I visit once a quarter just to read something not about our inevitable benevolent apocalyptic LLM gods and their Prophet St. Sam, prophesying complete job loss and despair. This reminds me of the Bitfinexed blog situation. That guy researched and documented the Tether token scam for years, and he was right. But he didn't account for a tiny nuance - Tethers are useful for financial crime, and that public props them up regardless of financial viability or rejection by every decent financial institution. Turns out you can have a hundred billion unbacked tokens if they are "alternatively backed" instead. I suspect the LLM monsters may turn out the same way (or not). Serious question - are there any LLM-bubble critics with a more sane and to-the-point style of writing, who aren't just posting unsubstantiated hype for views like most on YT? |
|
| ▲ | CSSer 2 days ago | parent | prev | next [-] |
| Weird, especially since a lot of us have similar opinions. Was he saying that from the start and has since shifted focus to it, or is it completely new? The conversation about cost isn't exactly a new one. |
|
| ▲ | alsetmusic a day ago | parent | prev [-] |
| I am sympathetic to his view because I also considered the whole AI hype train a complete scam until pretty recently. When I saw enough people validating that coding agents were actually legitimately ok and sometimes good at things, I decided to spend $50 on one to test it out. I have been pleasantly surprised at its utility knocking out grunt work. It's not super smart, but it's great at things like writing a Python script to edit characteristics of a JSONL file or sorting structured data. I didn't ever expect it to be useful beyond extremely limited output, and it's actually kinda good when you know how to narrowly target the tasks. The constraints of code make it a more suitable category than all the other stuff. It's still a bs hype machine with Elon saying it might save all of humanity in court today. That's pretty unlikely. |