whimsicalism 3 hours ago
You can read my reply to another comment making a similar point. In short, I think you are giving Doctorow far too much credit: the assumption that these tools are fundamentally incapable is woven throughout the essay, and the risk always comes from managers merely thinking these tools (which are obviously inferior) can do your job. The notion that they might actually be able to do your job is treated as invariably absurd, pie-in-the-sky, bubble thinking, or simply unmentionable. My point is that a technology that went from ChatGPT (cool, but useless) to Opus 4.5+ in three years is not obviously being oversold when it is pitched as something that can do your entire job rather than just being a useful tool.
happy_dog1 3 hours ago
I think we have to be careful about assuming that model capabilities will continue to grow at the rate they have in recent years. It is well documented that their growth has been accompanied by an exponential increase in the cost of building these models; see for example (among many examples) [1]. These costs include not just GPUs but also reinforcement learning from human feedback (RLHF), which is not cheap either: there is a reason Surge AI has over $1 billion in annual revenue (and Scale AI was doing quite well before it was acquired by Meta) [2].

Maybe model capabilities WILL continue to improve rapidly for years to come, in which case, yes, at some point it will be possible to replace most or all white-collar workers, and you are probably correct. The other possibility is that capabilities plateau at or not far above current levels because squeezing out further performance improvements simply becomes too expensive. In that case Cory Doctorow's argument seems sound: all of these tools currently need human oversight to work well, and if a human is paid to review everything the AI generates, then, as Doctorow points out, that human is effectively functioning as an accountability sink (we blame you when the AI screws up, have fun).

It's also worth bearing in mind that Geoffrey Hinton infamously predicted ten years ago that radiologists would all be out of a job within five years, when in fact demand for radiology has increased. He probably based this on a simple extrapolation from the rapid progress in image classification in the early 2010s. If image classification capabilities had continued to improve at that rate, he would probably have been right.

[1] https://arxiv.org/html/2405.21015v1

[2] https://en.wikipedia.org/wiki/Surge_AI
roxolotl 3 hours ago
But Cory isn't saying it's oversold; he's saying that the capture of value by a few companies, enabled by AI, is dangerous to society.