mrandish 2 days ago

I've only read a few of his pieces here and there and had just assumed he was an AI skeptic, so I never thought his position was that LLMs would never be good for anything at any price. That's a pretty extreme thing for any serious person to claim. Frankly, it seems more like a straw-man exaggeration of AI skepticism. I consider myself to generally be an AI skeptic, but to me that means skepticism about:

1) Nearer-term investment returns on AI businesses and data center build-outs.

2) Claims that LLMs are now rapidly displacing (or soon will displace) most/all senior positions in certain high-skill professions (eg software engineering, music/film making, etc), leading to fewer overall jobs for those kinds of workers and mass unemployment.

3) The "Foom" overnight-takeoff hypothesis that AI will soon be able to sustain substantial iterative self-improvement, directly yielding profound new fundamental capabilities across endless generations with no human involvement.

I've never thought that AI isn't already quite useful for some things today, or that no investors will ever make money on AI, or that AI won't displace some workers in some types of jobs, or that using AI isn't already helping accelerate the development of AI. Just that there's been a lot of hype, exaggeration and over-estimation about how much impact there will be, how soon, and how broadly. There will be a few instances of rapid, large impacts, but most of it will be slower, more gradual and less disruptive than the extreme predictions - and many of the most over-the-top predictions may never happen. Not because they can't happen, but probably for more mundane economic, logistic and human-factors reasons, along the lines of why we're no closer today to the 1950s vision of a flying car in every driveway.

JohnMakin 20 hours ago | parent | next [-]

Yeah, this is a good article documenting how, as early as 2024, he was claiming that the models were as good as they would ever be and mostly worthless:

https://www.theargumentmag.com/p/ais-biggest-critic-has-lost...

mrandish 13 hours ago | parent [-]

Thanks for that link. It's solidified the growing suspicion I've had that Zitron wasn't worth paying much attention to. If I'd read more than 5 or 6 of his posts I'd probably have gotten there sooner. I now place him alongside AI critics like Gary Marcus, whose early intuitions seem to have hardened into an extreme, unchanging broken record rather than a more reasonably nuanced counter to the frothiest AI hype.

It's sad because such extreme, over-broad views presented as absolutes save AI zealots the trouble of creating straw men of skeptical positions. It's easier to just lump all AI skeptics together with Zitron and Marcus. I guess it's time to call myself something else, like maybe "AI Realist." My skepticism around AI has always been more specifically targeted to questioning more extreme claims about the degree of impact and how soon it will be meaningfully felt across broader society. I've also tried to be clear my concerns are centered on LLMs and not AI or machine learning in general.

My position regarding the long term (5-10 yrs) has always acknowledged that LLM-based solutions will continue to improve substantially, find more meaningful real-world use cases, and that the currently unsustainable cost-to-value ratio will eventually normalize to a sustainable equilibrium enabling profitable businesses (after some major financial pain); but also that LLMs as a technology have some fundamental limits on what they can do which aren't separable from how they innately work. Practically, this means I doubt that LLMs, as one type of AI, can ever fully replace an experienced, highly effective human's ability to self-develop fundamentally new knowledge from novel contexts, reduce that learning to high-value abilities in applied practice, and then iteratively build on that loop to discover entire new areas of knowledge which weren't even visible without the prior layer of new knowledge - and then do that over and over. I've never thought that goal is categorically impossible for AI, just that it will require a new and different approach beyond LLMs. While that new approach may incorporate LLMs as an essential component, just evolving, refining and expanding LLMs alone won't get us there. I'm encouraged that several top AI research luminaries have recently been saying similar things.

dualvariable 2 days ago | parent | prev | next [-]

Yeah, I similarly doubt that LLMs are going to directly lead to AGI just via scaling and might almost be a dead end in that direction.

But they're still quite useful tools and accelerators or force-multipliers.

And you're still going to need humans in the loop.

And I'm very worried that the capex buildout will implode once we hit diminishing returns and good-enough models can be run on substantially smaller footprints.
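The "smaller footprints" worry is easy to sanity-check with back-of-envelope math: weight memory scales linearly with parameter count and bits per weight. A minimal sketch (illustrative numbers only; it ignores KV cache, activations and runtime overhead, and the 27B size is just an example):

```python
def weight_memory_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate weight storage in GB: params * bits / 8, ignoring
    KV cache, activations, and serving overhead."""
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# A hypothetical 27B-parameter model at common quantization levels:
for bits in (16, 8, 4):
    print(f"{bits}-bit: ~{weight_memory_gb(27, bits):.1f} GB")
```

At 4-bit quantization a 27B-parameter model needs roughly 13-14 GB for weights, i.e. it starts to fit on a single high-end consumer GPU rather than a data-center rack - which is exactly the kind of shrinking footprint that could undercut a massive capex build-out.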

It all isn't going away, though, and it will still continue to improve.

jcgrillo 2 days ago | parent | prev | next [-]

But are there any viable AI products? That, I think, is the root of his claim that it won't ever be good for anything. So far I have yet to hear of a really good, successful AI product. Coding tools arguably kind of work, but that's a pretty small addressable market, and it's still quite unclear whether any of them are viable long-term commercial bets. If you can get good results with Qwen 3.6-27B and Opencode, what good is an Anthropic? There are a lot of big, unanswered, foundational questions like that in this space, which is pretty alarming given the huge amounts of capital being tossed around. Commercially, I think the jury is still out on whether LLM-driven AI will ever be good for anything, and that's not necessarily an unreasonable position to take given the fundamental weaknesses of the underlying technology.

mike_hearn a day ago | parent [-]

What are you defining as good and successful? ChatGPT has 800M+ weekly active users; that seems pretty good and successful to me (not financially, but they have time).

AI companies aren't selling coding tools. Claude Code is not a coding tool! It's a tool that does coding, which is subtly different. The total addressable market for a coding tool is all developers, maybe 25-30M people worldwide; the total addressable market for people who need code written is potentially a few billion, maybe more.

jcgrillo a day ago | parent [-]

I'd like to see one of the major AI players demonstrate a successful exit. I don't think Coreweave counts here, because their long-term success is so tightly tied to the AI bubble continuing forever, which it probably won't. I want to see a strong company emerge from the bubble and start delivering real, sustainable value to its customers and investors. That would convince me it's possible to build a decent product and a real business on LLM AI technology.

dd8601fn 2 days ago | parent | prev [-]

Yeah, the dotcom crash didn’t prove that the internet was useless for business, and the housing crash didn’t mean houses don’t have value.

We get hype bubbles. They’re (nearly?) always bigger than the thing they’re about, in a given time and place.

It’s reasonable to think the AI hype train is one of those, to some degree or another. It’s also reasonable to see great utility in LLMs, now and in the future.