kalkin 3 hours ago
The article does a pretty lazy* job of defending its assumption that "solving really gnarly, abstract puzzles" is going to remain beyond AI capabilities indefinitely, but that is a load-bearing part of the argument. Doctorow does try to substantiate it by dismissing LLMs as next-word predictors. This is a description which is roughly accurate at some level of reduction, but it has not helped anyone predict the last three years of advances, and so seems pretty unlikely to be a helpful guide to the next three years.

The other argument Doctorow gives for the limits of LLMs is the example of typo-squatting. This isn't an attack that's new to LLMs and, while I don't know if anyone has done a study, I suspect it's already the case in January 2026 that a frontier model is no more susceptible to this than the median human, or perhaps less; certainly, in general, Claude is less likely to make a typo than I am. There are categories of mistakes it's still more likely to make than me, but the example here is already looking out of date, which isn't promising for the wider argument.

*to be fair, it's clearly not aimed at a technical audience.