JohnMakin 20 hours ago
Yeah, this is a good article documenting how he was claiming, as early on as 2024, that the models were as good as they would ever be and mostly worthless: https://www.theargumentmag.com/p/ais-biggest-critic-has-lost...
mrandish 13 hours ago | parent
Thanks for that link. It's solidified my growing suspicion that Zitron wasn't worth paying much attention to. If I'd read more than 5 or 6 of his posts I'd probably have gotten there sooner. I now place him alongside AI critics like Gary Marcus, whose early intuitions seem to have hardened into an extreme, unchanging broken record rather than a reasonably nuanced counter to the frothiest AI hype. It's sad, because such extreme, over-broad views presented as absolutes save AI zealots the trouble of constructing straw men out of skeptical positions. It's easier to just lump all AI skeptics in with Zitron and Marcus. I guess it's time to call myself something else, maybe "AI Realist."

My skepticism around AI has always been targeted more specifically at questioning the most extreme claims about the degree of impact and how soon it will be meaningfully felt across broader society. I've also tried to be clear that my concerns center on LLMs, not AI or machine learning in general. My position on the long term (5-10 yrs) has always acknowledged that LLM-based solutions will continue to improve substantially, will find more meaningful real-world use cases, and that the currently unsustainable cost-to-value ratio will eventually normalize to a sustainable equilibrium enabling profitable businesses (after some major financial pain). But LLMs as a technology still have fundamental limits on what they can do, limits which aren't separable from how they innately work.

Practically, this means I doubt that LLMs, as one type of AI, can ever fully replace an experienced, highly effective human's ability to self-develop fundamentally new knowledge from novel contexts, reduce that learning to high-value abilities in applied practice, then iteratively build on that loop to discover entire new areas of knowledge which weren't even visible without the prior layer of new knowledge - and then do that over and over.
I've never thought that goal is categorically impossible for AI, just that reaching it will require a new and different approach beyond LLMs. While that new approach may incorporate LLMs as an essential component, simply evolving, refining and expanding LLMs alone won't get us there. I'm encouraged that several top AI research luminaries have recently been saying similar things.