moab a day ago

> "OpenBrain (the leading US AI project) builds AI agents that are good enough to dramatically accelerate their research. The humans, who up until very recently had been the best AI researchers on the planet, sit back and watch the AIs do their jobs, making better and better AI systems."

I'm not sure what gives the authors the confidence to make such predictions. Wishful thinking? Worst-case paranoia? I agree that such an outcome is possible, but on 2--3 year timelines? This would imply that the approach everyone is taking right now is the right one and that there are no hidden conceptual roadblocks to achieving AGI/superintelligence from DFS-ing down this path.

All of the predictions seem to ignore the possibility of such barriers, or at most acknowledge it and wave it away by appealing to the army of AI researchers and the industry funding being allocated to the problem. IMO the onus is on the proposers of such timelines to argue why no such barriers exist and why we should expect predictable scaling over a 2--3 year horizon.

throwawaylolllm a day ago | parent | next [-]

It's my belief (and I'm far from the only person who thinks this) that many AI optimists are motivated by an essentially religious belief that you could call Singularitarianism. So "wishful thinking" would be one answer. This document would then be the rough equivalent of a Christian fundamentalist outlining, on the basis of tangentially related news stories, how the Second Coming will come to pass in the next few years.

viccis a day ago | parent | next [-]

Crackpot millenarians have always been a thing. This crop of them is just particularly lame and hellbent on boiling the oceans to get their eschatological outcome.

ivm 19 hours ago | parent | prev | next [-]

Spot on, see the 2017 article "God in the machine: my strange journey into transhumanism" about that dynamic:

https://www.theguardian.com/technology/2017/apr/18/god-in-th...

pixl97 a day ago | parent | prev | next [-]

Eh, not sure if the second coming is a great analogy. That wholly depends on the whims of a fictional entity performing some unlikely actions.

Instead, think of them saying a crusade is coming in the next few years. When the group saying the crusade is coming is spending billions of dollars trying to make exactly that occur, you no longer have the ability to say it's not going to happen. You are now forced to examine the risks of their actions.

spacephysics 11 hours ago | parent | prev [-]

Reminds me of Fallout's "Church of the Children of Atom"

Maybe we'll see "Church of the Children of Altman" /s

It seems that without a framework of ethics/morality (insert XYZ religion), we humans find one to grasp onto, be it a cult, a set of not-so-fleshed-out ideas/philosophies, etc.

People who say they aren't religious per se still seem to have some set of beliefs that amounts to a religion. It just depends on who or what you look to for those beliefs, many of which seem to be haphazard.

The people I disagree with most often at least realize which ideas/beliefs unify their structure of reality; others are simply unaware.

A small minority of people can draw on schools of philosophical thought and 'try on' or play with different ideas, while retaining the self-reflection to notice when they transgress against ABC philosophy or when a philosophy no longer matches their identity.

barbarr a day ago | parent | prev | next [-]

It also ignores the possibility of a plateau... maybe there's a maximum amount of intelligence that matter can support, and it doesn't scale up with copies or speed.

AlexandrB a day ago | parent | next [-]

Or it scales sub-linearly with hardware. When you're on the rising portion of an S-curve[1], you can't tell how much longer it will go on before plateauing.

A lot of this resembles post-war futurism that assumed we would all be flying around in spaceships and personal flying cars within a decade. Unfortunately, the rapid pace of transportation innovation slowed due to physical and cost constraints, and we've made little progress (beyond cost optimization) since.

[1] https://en.wikipedia.org/wiki/Sigmoid_function
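The "can't tell where you are on the curve" point can be checked numerically. A minimal sketch (with arbitrary, made-up curve parameters; `k` and `t0` are illustrative, not drawn from any real data) comparing a logistic curve to the exponential that matches its early growth:

```python
import math

def sigmoid(t, k=1.0, t0=10.0):
    # Logistic curve: grows ~exponentially for t well below the
    # inflection point t0, then plateaus at 1.
    return 1.0 / (1.0 + math.exp(-k * (t - t0)))

def exponential(t, k=1.0, t0=10.0):
    # Pure exponential matched to the sigmoid's early-phase growth.
    return math.exp(k * (t - t0))

# Early on, the two curves are nearly indistinguishable; only later
# does the sigmoid flatten while the exponential keeps climbing.
for t in [0, 2, 4]:
    s, e = sigmoid(t), exponential(t)
    print(f"t={t}: sigmoid={s:.6f} exponential={e:.6f}")
```

At t=0 the relative difference between the two is on the order of e^(-10), so an observer with only early data points has no way to distinguish them.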

Tossrock a day ago | parent [-]

The fact that it scales sub-linearly with hardware is well known and in fact foundational to the scaling laws on which modern LLMs are built, i.e. performance scales remarkably closely with log(compute + data), over many orders of magnitude.
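To illustrate the logarithmic-returns claim with a sketch (the coefficients `a` and `b` here are made up, not fitted to any real model family): a power-law loss curve means each constant *factor* of extra compute buys the same multiplicative improvement in loss, so gains are linear in log(compute).

```python
# Illustrative power-law loss curve in the style of LLM scaling laws.
# Hypothetical coefficients; the shape, not the numbers, is the point.
def loss(compute, a=10.0, b=0.05):
    # Loss falls as a power of compute: loss = a * compute^(-b).
    return a * compute ** (-b)

for c in [1e18, 1e20, 1e22]:  # each step is 100x more compute
    print(f"{c:.0e} FLOPs -> loss {loss(c):.4f}")
```

Because the improvement ratio per 100x of compute is constant (100^b), each additional order of magnitude of hardware buys a fixed, and in absolute terms shrinking, slice of progress.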

pixl97 a day ago | parent | prev [-]

Eh, the math still doesn't work out in humans' favor...

Let's say intelligence caps out at the level of the smartest person who's ever lived. Well, the first thing we'd attempt to do is build machines up to that limit, one that 99.99999 percent of us would never get close to. Moreover, the thinking part of a human is only around two pounds of mush inside our heads. On top of that, you don't have to grow machines for 18 years before they start outputting something useful. They won't need sleep. You can feed them with solar panels. And they won't get distracted by that super sleek server rack across the aisle.

We do know that 'hive' or societal intelligence scales over time, especially with integration with tooling. The amount of knowledge we have, and the means by which we can apply it, simply dwarf those of previous generations.

ddp26 21 hours ago | parent | prev | next [-]

Check out the Timelines Forecast under "research". They model this very carefully.

(They could be wrong, but this isn't a guess, it's a well-researched forecast.)

MrScruff 16 hours ago | parent | prev [-]

I would assume this comes from faith in the overall exponential trend rather than from getting into the weeds of how it will come about. I can sort of see why you might think that way: everyone was talking about hitting a wall with brute-force scaling, and then inference-time scaling came along to keep things progressing. I wouldn't be quite as confident personally, and as many have said before, a sigmoid looks like an exponential in its initial phase.