ctoth 5 days ago

What if this paper actually took things seriously?

A serious paper would start by acknowledging that every previous general-purpose technology required human oversight precisely because it couldn't perceive context, make decisions, or correct errors - capabilities that are AI's core value proposition. It would wrestle with the fundamental tension: if AI remains error-prone enough to need human supervisors, it's not transformative; if it becomes reliable enough to be transformative, those supervisory roles evaporate.

These two Princeton computer scientists, however, just spent 50 pages arguing that AI is like electricity while somehow missing that electricity never learned to fix itself, manage itself, or improve itself - which is literally the entire damn point. They're treating "humans will supervise the machines" as an iron law of economics rather than a temporary bug in the automation process that every profit-maximizing firm is racing to patch. Sometimes I feel like I'm losing my mind, because it seems obvious that GPT-5 would have handled the historical analogies better than Narayanan and Kapoor did in their own paper.

nottorp 5 days ago

> because it couldn't perceive context, make decisions, or correct errors - capabilities that are AI's core value proposition

Then I could ask the same of you: when will you take "AI" seriously and stop attributing those capabilities to it?

simonh 5 days ago

LLMs do have to be supervised by humans, and they do not perceive context or correct errors; it's not at all clear this is going to change any time soon. In fact, it's plausible that this stems from basic limitations of the current technology. If that changes, then sure, you'd have a point, but I'm certainly not taking it as a given.

cubefox 4 days ago

They have been correcting errors since OpenAI introduced its o1 model, and the improvements since then have been significant. It seems practically certain that their capabilities will keep growing rapidly. Do you think AI will suddenly stagnate, so that models in five years are not much more capable than they are now? That would be absurd. Look back five years and we were practically in the AI stone age.

cubefox 4 days ago

Exactly. People seem to want to underhype AI. It's like a chimpanzee saying: humans are just normal apes.

Delusional.