kuerbel 3 hours ago

I think this comment is reacting to a different argument than the one the article is actually making.

The piece isn’t claiming that AI tools are useless or that they don’t materially improve day-to-day work. In fact, it more or less assumes the opposite. The critique is about the economic and organizational story being told around AI, not about whether an individual developer can ship faster today.

Saying “these tools now do a considerable portion of my work” operates on the micro level of personal productivity. Doctorow is operating on the macro level: how firms reframe human labor as “automation,” push humans into oversight and liability roles, and use exaggerated autonomy claims to justify valuations, layoffs, and cost-cutting.

Ironically, the “Wile E. Coyote running off a cliff” metaphor aligns more with the article than against it. The whole “reverse centaur” idea is that jobs don’t disappear instantly; they degrade first. People keep running because the system still sort of works, until the ground is gone and the responsibility snaps back onto humans.

So there’s no contradiction between “this saves me hours a day” and “this is being oversold in ways that will destabilize jobs and business models.” Those two things can be true at the same time. The comment seems to rebut “AI doesn’t work,” which isn’t really the claim being made.

whimsicalism 3 hours ago | parent | next [-]

You can read my reply to another comment making a similar point. In short, I think you are giving Doctorow far too much credit: the assumption that these tools are fundamentally incapable is woven throughout the essay, and the risk always comes from managers who might think these tools (which are obviously inferior) can do your job. The notion that they actually can is treated as invariably absurd, pie-in-the-sky bubble thinking, or simply unmentionable.

My point is that I don’t think a technology that went from ChatGPT (cool, but useless) to Opus 4.5+ in three years is obviously being oversold when the claim is that it can do your entire job, rather than merely serve as a useful tool.

happy_dog1 3 hours ago | parent | next [-]

I think we have to be careful when assuming that model capabilities will continue to grow at the rate they have in recent years. It is well documented that their growth has been accompanied by an exponential increase in the cost of building these models; see [1] for one of many examples. These costs include not just GPUs but also reinforcement learning from human feedback (RLHF), which is not cheap either -- there is a reason SurgeAI has over $1 billion in annual revenue (and ScaleAI was doing quite well before it was purchased by Meta) [2].

Maybe model capabilities WILL continue to improve rapidly for years to come, in which case, yes, at some point it will be possible to replace most or all white collar workers. In that case you are probably correct.

The other possibility is that capabilities will plateau at or not far above current levels because squeezing out further performance improvements simply becomes too expensive. In that case Cory Doctorow's argument seems sound: all of these tools currently need human oversight to work well, and if a human is being paid to review everything the AI generates, then, as Doctorow points out, that human is effectively functioning as an accountability sink (we blame you when the AI screws up; have fun).

I think it's worth bearing in mind that Geoffrey Hinton (infamously) predicted ten years ago that radiologists would all be out of a job in five years, when in fact demand for radiology has increased. He probably based this on some simple extrapolation from the rapid progress in image classification in the early 2010s. If image classification capabilities had continued to improve at that rate, he would probably have been correct.

[1] https://arxiv.org/html/2405.21015v1
[2] https://en.wikipedia.org/wiki/Surge_AI

roxolotl 3 hours ago | parent | prev [-]

But Cory isn’t saying it’s oversold; he’s saying that the value capture by a few companies, enabled by AI, is dangerous to society.

whimsicalism 3 hours ago | parent [-]

I do not agree with your reading of the article. The premise - both implicit and stated explicitly throughout - is that companies are hyping this up because they want to be seen as growing, that this technology cannot do your job, and that these are statistical tools being foolishly used to replace real workers. Look at the bits I quote in my other comment.

I would have been much more interested in reading the article you’re suggesting.

artninja1988 2 hours ago | parent | prev | next [-]

> The piece isn’t claiming that AI tools are useless or that they don’t materially improve day-to-day work

Would you call something that could replace your labor "spicy autocomplete"? He also invokes NFTs and blockchain, for some reason. To me this phrasing makes it sound like he thinks they are damn near useless.

sodapopcan 3 hours ago | parent | prev [-]

> I think this comment is reacting to a different argument than the one the article is actually making.

The headline.

bee_rider 3 hours ago | parent | next [-]

We’re supposed to pretend people read articles instead of just the headline (it is in the posting guidelines). To play along with that rule, people will write as if the poster they are responding to missed some nuance of the article.

whimsicalism 2 hours ago | parent [-]

I like that you all are having your own little side conversation making fun of me without engaging at all on the substance.

sodapopcan 2 hours ago | parent | next [-]

This isn't a conversation, lol.

I don't have much to offer here (and yes, sorry, after I made my snarky remark I realized you had indeed read the article). I recognize AI's capabilities but don't use it, primarily for political reasons but also because I just enjoy writing code. I'll sometimes use up the ChatGPT free limit treating it as a somewhat better search engine (and it's not always better), but there's no way I'm paying for agents -- which has everything to do with where the money is going, not the money itself. Of course there are other reasons beyond how AI is used by programmers, but those would derail the general theme of these threads.

I'm just drawn to these threads for the drama, and sometimes they trigger me into writing a snarky throwaway comment. If the discussions, and particularly the companies themselves, could shift to the actual societal good AI can do and how it is concretely getting there, that would hold my attention. Instead we get Sona etc.

bee_rider 2 hours ago | parent | prev | next [-]

That’s fair.

I was accepting sodapopcan’s premise while responding to them. My joke was aimed at the posting guidelines and these little Hacker News traditions, but it was a bit dismissive toward you, which is a little rude. Sorry.

zdragnar 3 hours ago | parent | prev [-]

Ah yes, the notoriously accurate headline.