| ▲ | Sharlin 3 hours ago |
| I think you’ll find the essay much more nuanced than that. It only incidentally discusses what you’re thinking about.
|
| > Think of AI software generation: there are plenty of coders who love using AI. Using AI for simple tasks can genuinely make them more efficient and give them more time to do the fun part of coding, namely, solving really gnarly, abstract puzzles. But when you listen to business leaders talk about their AI plans for coders, it’s clear they are not hoping to make some centaurs.
|
| ▲ | kalkin 3 hours ago | parent | next |
| The article does a pretty lazy* job of defending its assumption that "solving really gnarly, abstract puzzles" is going to remain beyond AI capabilities indefinitely. That assumption is a load-bearing part of the argument, and Doctorow does try to substantiate it, by dismissing LLMs as next-word predictors. That description is roughly accurate at some level of reduction, but it has not helped anyone predict the last three years of advances, so it seems pretty unlikely to be a helpful guide to the next three.
|
| The other argument Doctorow gives for the limits of LLMs is the example of typo-squatting. This isn't an attack that's new to LLMs, and, while I don't know if anyone has done a study, I suspect it's already the case in January 2026 that a frontier model is no more susceptible to it than the median human, or perhaps less; certainly, in general, Claude is less likely to make a typo than I am. There are categories of mistakes it's still more likely to make than me, but this example is already looking out of date, which isn't promising for the wider argument.
|
| *To be fair, the essay is clearly not aimed at a technical audience.
|
| ▲ | whimsicalism 3 hours ago | parent | prev |
| I disagree. The article leads with the sentiment I mention and has it woven throughout. The theme is that AI is obviously not capable of doing your job; the problem is that the stupid managerial class will get convinced it is and make things shitty.
|
| > This is another key to understanding – and thus deflating – the AI bubble. The AI can’t do your job, but an AI salesman can convince your boss to fire you and replace you with an AI that can’t do your job.
|
| > Now, AI is a statistical inference engine. All it can do is predict what word will come next based on all the words that have been typed in the past. That means that it will “hallucinate” a library called lib.pdf.text.parsing …
|
| I think this is a convenient, palatable, and obviously comforting lie that lots of people right now are telling themselves. To me, all the ‘nuance’ in this article is there because the coyote in Doctorow has begun looking down but still cannot quite believe it. He is still leaning on the same tropes of statistical autocomplete that have been a mainstay of the fingers-in-ears gang for the last three years.
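
A minimal sketch of the failure mode the quoted passage describes, sometimes called "slopsquatting": an LLM hallucinates a plausible dependency name, and that name is exploitable precisely when it is unregistered, because an attacker can claim it. The exists_on_pypi helper below is a hypothetical illustration written for this thread, not an established tool; the package name comes from the essay's example.

    # Sketch of the slopsquatting risk: an LLM hallucinates a plausible
    # dependency name, an attacker registers that name on PyPI, and anyone
    # who installs the generated requirements blindly runs attacker code.
    import urllib.error
    import urllib.request

    def exists_on_pypi(package: str) -> bool:
        """Return True if `package` is already registered on PyPI."""
        url = f"https://pypi.org/pypi/{package}/json"
        try:
            with urllib.request.urlopen(url) as resp:
                return resp.status == 200
        except urllib.error.HTTPError:
            return False  # 404 means the name is unclaimed

    # A hallucinated import is dangerous exactly when the name is claimable:
    # nonexistent today, registerable by an attacker tomorrow.
    for dep in ("requests", "lib.pdf.text.parsing"):
        state = "exists" if exists_on_pypi(dep) else "unclaimed; vet before installing"
        print(f"{dep}: {state}")
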
| ▲ | alpha_squared 3 hours ago | parent |
| You're in half the comment replies with a confrontational and, at times, quite aggressive tone. It does not feel as though you're sincerely engaging; it feels like you have an ideological world view that makes it difficult to reconcile different perspectives.
|
| I'm working directly with these tools and have several colleagues who are as well. Our collective anecdotal experience keeps coming back to the conclusion that the tech just isn't where the marketing is on its capabilities. There's probably some value in the tech here, which leads others like yourself to be so completely sold on it, but it's just not materializing that much in my day-to-day outside of creating the most basic code/scaffolding, which I then have to go back and fix because of subtle errors. It's actually hard to tell whether my productivity is better, since I have to spend time fixing the generated output. Maybe it would help to recognize that your experience is not the norm.
|
| And if the tech were there, where are the actual profits from selling it? It's increasingly common for the product to be "under development" when sold to consumers, or deployed only as a chatbot in scenarios where it's acceptable to be wrong, with warnings to verify the output yourself.
| ▲ | whimsicalism 3 hours ago | parent | next |
| I’m replying to the people replying to me, which is hopefully permissible? I will respond aggressively to people who say my work must not be very important or hard if I feel that AI can do a considerable portion of my day-to-day, because I feel that is initiating rudeness, and I find that the HN tendency to talk down to people expressing this opinion is chilling important conversations. If my other replies come off as aggro, I apologize; I definitely struggle with moderating tone in comments to reflect how I actually feel.
|
| > Our collective anecdotal experience keeps coming back to the conclusion that the tech just isn't where the marketing is on its capabilities. There's probably some value in the tech here, which leads others like yourself to be so completely sold on it
|
| Let me be clear: I am not so completely sold on the current iteration. But I think there has been a significant improvement even since the midpoint of last year, the share of diffs I return mostly unedited is sharply increasing, and many people I talk to privately tell me they are no longer authoring any code themselves except for minor edits to diffs. Given that it has only been three years since ChatGPT, I am really just looking at the curve and saying ‘woah.’
| ▲ | kalkin 2 hours ago | parent | prev |
| I don't think the commenter to whom you're replying is any more aggressive than, e.g., this one: https://news.ycombinator.com/item?id=46668988
|
| It's unfortunately the case that even understanding what AI can and cannot do has become a matter of, as you say, "ideological world view". Ideally we'd be able to discuss what's factually true of AI at the beginning of 2026, and what's likely to be true within the next few years, independently of whether the trends are good for most humans or what we ought to do about them. In practice that's become pretty difficult, and the article to which we're all responding does not contribute positively.
|
|