LegionMammal978 5 days ago

I think the usual counterargument to the strong form is, "So you're saying that not even an AI with a computer the size of Jupiter (or whatever) could run circles around the best humans? Nonsense!" Sometimes with some justification along the lines of, "Evolution doesn't select for as much intelligence as possible, so the sky's the limit relative to humans!" And as to inherently hard problems, "A smart AI will just simplify its environment until it's manageable!"

But these don't really address the near-term question of "What if growth in AI capabilities continues, but becomes greatly sub-exponential in terms of resources spent?", which would put a huge damper on all the "AI takeoff" scenarios. Many strong believers seem to think "a constant rate of relative growth" is so intuitive as to be unquestionable.
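As a toy illustration of that sub-exponential case (an assumed model, not anything claimed above): suppose capability grows only logarithmically with the resources spent. Then keeping up "a constant rate of relative growth" in capability requires exponentially growing resources, and takeoff stalls.

    # Toy model (hypothetical assumption): capability(C) = log2(C),
    # i.e. each doubling of compute adds only a fixed capability increment.
    import math

    def capability(compute):
        return math.log2(compute)

    compute = 1.0
    for step in range(1, 6):
        compute *= 10  # pour in 10x more resources each step
        print(f"step {step}: compute x{compute:.0e}, capability {capability(compute):.1f}")
    # Capability climbs by only ~3.3 per 10x of compute: strongly
    # sub-exponential returns, no runaway growth under this assumption.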

logicchains 5 days ago

>Many strong believers seem to think "a constant rate of relative growth" is so intuitive as to be unquestionable.

Because they never give a rigorous definition of intelligence. The closest thing psychology has to one is the g factor, which correlates with IQ and with performance across a wide range of tasks, and which empirically shows diminishing returns in terms of productivity.

A more general definition is "the relative ability to solve problems (and the relative speed of solving them)". Attempting to model this mathematically leads inevitably into theoretical computer science and computational complexity, because that's the field that classifies problems by their difficulty. But complexity theory shows that only a small class of the problems we can model gain linearly from a linear increase in computing power, and for the problems we can't model, we have no reason to believe they mostly fall into this category. Believers, by contrast, implicitly assume that the vast majority of problems do.
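A sketch of that last point (illustrative only; the brute-force cost model is my assumption, not something stated above): for a problem solved by exhaustive search over n binary variables, the cost is 2^n evaluations, so linear increases in compute buy only logarithmic increases in the size of instance you can handle.

    # Hypothetical example: exhaustive search over n binary variables
    # costs 2**n evaluations, so the largest solvable n grows only
    # logarithmically with the compute budget.
    import math

    def solvable_size(budget):
        # largest n such that 2**n <= budget
        return int(math.log2(budget))

    for budget in (1e6, 1e9, 1e12):
        print(f"{budget:.0e} evaluations -> n = {solvable_size(budget)}")
    # 1e6 -> n = 19, 1e9 -> n = 29, 1e12 -> n = 39: a millionfold jump in
    # compute only about doubles the tractable instance size here.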