logicchains 5 days ago

These kinds of predictions never address the fact that, empirically speaking, there are diminishing returns to intelligence. IQ only correlates with income up to a point, after which the correlation breaks down: https://www.sciencedaily.com/releases/2023/02/230208125113.h... . Similarly, the most politically powerful and influential people are generally not those at the top of the IQ scale.

And that matches what we expect theoretically: of the difficult problems we can model mathematically, the vast majority benefit sub-linearly from a linear increase in processing power. And of the processes we can model in the physical world, many are chaotic in the formal sense, meaning a linear increase in processing power yields only a sublinear increase in how far ahead in time we can simulate them. Such computational complexity results are set in stone: no amount of hand-wavy "superintelligence" could sort an array of arbitrary comparables in O(log(n)) time (comparison sorting has an Ω(n log n) lower bound), any more than it could make 1+1=3.
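
To make the chaos point concrete, here's a toy sketch (illustrative only, not a proof) using the logistic map, where the gap between nearby trajectories roughly doubles each step: a 1000x more precise starting measurement buys only about 10 extra steps of reliable prediction.

    # Two logistic-map trajectories x_{n+1} = 4x(1-x) starting a tiny
    # distance apart.  The gap roughly doubles each step (Lyapunov
    # exponent ln 2), so 1000x more initial precision buys only about
    # 10 more steps before the trajectories disagree.
    def steps_in_agreement(x0, delta, tol=1e-3, max_steps=200):
        a, b = x0, x0 + delta
        for n in range(max_steps):
            if abs(a - b) > tol:
                return n
            a, b = 4 * a * (1 - a), 4 * b * (1 - b)
        return max_steps

    for delta in (1e-6, 1e-9, 1e-12):
        print(f"initial gap {delta:.0e}: {steps_in_agreement(0.2, delta)} steps of agreement")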

LegionMammal978 5 days ago | parent | next [-]

I think the usual counterargument to the strong form is, "So you're saying that not even an AI with a computer the size of Jupiter (or whatever) could run circles around the best humans? Nonsense!" Sometimes with some justification along the lines of, "Evolution doesn't select for as much intelligence as possible, so the sky's the limit relative to humans!" And as to inherently hard problems, "A smart AI will just simplify its environment until it's manageable!"

But these don't really address the near-term question of "What if growth in AI capabilities continues, but becomes greatly sub-exponential in terms of resources spent?", which would put a huge damper on all the "AI takeoff" scenarios. Many strong believers seem to think "a constant rate of relative growth" is so intuitive as to be unquestionable.

logicchains 5 days ago | parent [-]

>Many strong believers seem to think "a constant rate of relative growth" is so intuitive as to be unquestionable.

Because they never give a rigorous definition of intelligence. The most rigorous definition in psychology is the g factor, which correlates with IQ and with the ability to solve various tasks well, and which empirically shows diminishing returns in terms of productivity.

A more general definition is "the relative ability to solve problems (and the relative speed at solving them)". Attempting to model this mathematically inevitably leads into theoretical computer science and computational complexity, because that's the field that classifies problems by their difficulty. But complexity theory shows that only a small class of the problems we can model gets a linear benefit from a linear increase in computing power, and for the problems we can't model, we have no reason to believe they mostly fall into this category. Believers, by contrast, implicitly assume that the vast majority of problems do.
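
As a rough back-of-the-envelope illustration (abstract operation counts, nothing more): a 1000x compute increase lets you solve instances 1000x bigger only when the cost is linear; for quadratic cost you get roughly 32x, and for an exponential brute force you gain only about 10 more units of problem size.

    # Largest instance size n solvable within an abstract "operations"
    # budget, for a few cost models T(n).  A 1000x bigger budget helps
    # a lot for linear problems and barely at all for exponential ones.
    import math

    budget = 10 ** 6
    bigger = 1000 * budget

    for name, n_max in [
        ("T(n) = n",   lambda b: b),
        ("T(n) = n^2", lambda b: math.isqrt(b)),
        ("T(n) = 2^n", lambda b: int(math.log2(b))),
    ]:
        print(f"{name:10}  n_max: {n_max(budget):>10} -> {n_max(bigger):>13}")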

Natsu 5 days ago | parent | prev | next [-]

That finding is probably not reliable because of the way they do binning:

https://www.cremieux.xyz/p/brief-data-post?open=false#%C2%A7...

TheOtherHobbes 5 days ago | parent | prev [-]

IQ is mostly a measure of processing speed and memory, with some educational bias that's hard to filter out.

You don't get useful intelligence unless the software is also fit for purpose. Slow hardware running good software can still outperform fast hardware running broken software.

Social status depends on factors like good looks, charm, connections, and general chutzpah, often with more or less overt hints of narcissism. That's an orthogonal set of skills to being able to do tensor calculus.

As for an impending AI singularity - no one has the first clue what the limits are. We like to believe in gods, and we love stories about god-like superpowers. But there are all kinds of issues which could prevent a true singularity - from stability constraints on a hypercomplex recursive system, to resource constraints, to physical limits we haven't encountered yet.

Even if none of those are a problem, for all we know an ASI may decide we're an irrelevance and just... disappear.

logicchains 5 days ago | parent [-]

>As for an impending AI singularity - no one has the first clue what the limits are.

That's simply untrue. Theoretical computer scientists understand the lower bounds for many classes of problems, and they know that for many problems it's mathematically impossible to significantly improve performance with only a linear increase in computing power, regardless of the algorithm/brain/intelligence doing the work. Many problems wouldn't benefit much even from a superlinear increase in computing power, because of the nature of exponential growth. For a chaotic system in the mathematical sense, where prediction grows exponentially harder with time, even exactly predicting one minute ahead could require more compute than could be provided by turning the entire known universe into a computer.
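
To put a rough number on the chaos point (purely illustrative; real systems have their own error-doubling times): if prediction error doubles every step, each extra step of horizon costs another bit of initial-state precision, so the number of initial states you'd need to be able to distinguish grows like 2^T.

    # If prediction error doubles every step, keeping the error below a
    # fixed tolerance at horizon T needs ~T extra bits of initial-state
    # precision, i.e. ~2^T distinguishable initial states.  Illustrative
    # numbers only; real systems have their own doubling times.
    for horizon in (10, 60, 300, 1000):
        states = 2.0 ** horizon
        print(f"{horizon:5} steps ahead: ~{horizon} extra bits "
              f"(~{horizon * 0.301:.0f} decimal digits), "
              f"~{states:.1e} initial states to resolve")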