hyperpape 5 days ago

It's wild how far off these predictions are, and yet there are still people who take them seriously.

No matter how impressive you find current LLMs, even if you're the sort of billionaire who predicts AGI before the end of 2025[0], the mechanism that Bostrom describes in this article is completely irrelevant.

We haven't figured out how to simulate human brains in a way that could create AI, and we're not anywhere close; we've done something entirely different instead.

[0] Yes, I too think most of this is cynical salesmanship, not honest foolishness.

lukeschlather 5 days ago

The predictions in this paper are 100% correct. The author doesn't predict we would have ASI by now. They accurately predicted that Moore's law would likely start to break down by 2012, and also that EUV would allow further scaling beyond that barrier, though things would get harder. You may think LLMs are nothing like "real" AI, but I'm curious what you think about the arguments in this paper, and what sort of hardware you think a "real" AI requires, if not hardware in the neighborhood of 10^14 to 10^17 operations per second.

Whether or not LLMs are the correct algorithm, the hardware question is much more straightforward and that's what this paper is about.
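To put rough numbers on that range -- a back-of-the-envelope sketch of my own, not something from the paper; the ~10^12 ops/s baseline for a top 1997 machine and the 18-month doubling time are my assumptions:

    import math

    # Hypothetical extrapolation: when does hardware cross Bostrom's
    # 10^14-10^17 ops/s range, assuming a ~1e12 ops/s supercomputer
    # baseline in 1997 and a classic 18-month Moore's-law doubling?
    BASELINE_YEAR = 1997
    BASELINE_OPS = 1e12       # assumed ops/s of a top machine in 1997
    DOUBLING_YEARS = 1.5      # assumed doubling time

    def year_reached(target_ops):
        doublings = math.log2(target_ops / BASELINE_OPS)
        return BASELINE_YEAR + doublings * DOUBLING_YEARS

    for target in (1e14, 1e17):
        print(f"{target:.0e} ops/s around {year_reached(target):.0f}")
    # 1e+14 ops/s around 2007
    # 1e+17 ops/s around 2022

Under those assumptions the whole range lands between the mid-2000s and the early 2020s, which is roughly the window the paper is reasoning about.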

hyperpape 5 days ago

The entire discussion in the software section is about simulating the brain.

> Creating superintelligence through imitating the functioning of the human brain requires two more things in addition to appropriate learning rules (and sufficiently powerful hardware): it requires having an adequate initial architecture and providing a rich flux of sensory input.

> The latter prerequisite is easily provided even with present technology. Using video cameras, microphones and tactile sensors, it is possible to ensure a steady flow of real-world information to the artificial neural network. An interactive element could be arranged by connecting the system to robot limbs and a speaker.

> Developing an adequate initial network structure is a more serious problem. It might turn out to be necessary to do a considerable amount of hand-coding in order to get the cortical architecture right. In biological organisms, the brain does not start out at birth as a homogenous tabula rasa; it has an initial structure that is coded genetically. Neuroscience cannot, at its present stage, say exactly what this structure is or how much of it needs to be preserved in a simulation that is eventually to match the cognitive competencies of a human adult. One way for it to be unexpectedly difficult to achieve human-level AI through the neural network approach would be if it turned out that the human brain relies on a colossal amount of genetic hardwiring, so that each cognitive function depends on a unique and hopelessly complicated inborn architecture, acquired over aeons in the evolutionary learning process of our species.

lukeschlather 5 days ago

No, it's about imitation, not simulation. The point is to define how large a computer you would need to match the human brain's performance on "intelligence" tasks. The comparison to the human brain is there because we know human brains can do these kinds of reasoning and motor tasks, which gives us a lower bound on how much computing power is necessary. It doesn't presume we're going to simulate a human brain; that's just mentioned as one way we might do it.

But I still think you're not engaging with the article properly: it doesn't say we will build superintelligence this way, it just estimates how much computing power you might need. Within the paper it even suggests we don't yet have enough computing power, but it doesn't seem like you read deeply enough to engage with that point.

hyperpape 5 days ago

You're right to distinguish imitation from simulation. That's a good distinction, and I agree the paper is discussing imitation--using learning algorithms similar to those the brain uses, fed with realistic data from input devices. But my point still stands for imitation.

> This paper outlines the case for believing that we will have superhuman artificial intelligence within the first third of the next century. It looks at different estimates of the processing power of the human brain; how long it will take until computer hardware achieve a similar performance; ways of creating the software through bottom-up approaches like the one used by biological brains; how difficult it will be for neuroscience figure out enough about how brains work to make this approach work; and how fast we can expect superintelligence to be developed once there is human-level artificial intelligence.

The paper very clearly suggests an estimate of the required hardware power for a particular strategy of imitating the brain. And it very clearly predicts superintelligence "within the first third of the next century," i.e. by 2033.

If that strategy is a non-starter, which it is for the foreseeable future, then the hardware estimate is irrelevant: the strategies actually available to us may require orders of magnitude more computing power, or may simply fail to work with any amount of computing power.
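For scale -- again my own hedged arithmetic, not the paper's: at an assumed 18-month doubling time, each extra order of magnitude of required compute only pushes the crossover date out by about five years.

    import math

    # Each 10x of additional required compute, at an assumed 18-month
    # doubling time, delays the hardware crossover by log2(10) * 1.5 years.
    DOUBLING_YEARS = 1.5
    delay_per_10x = math.log2(10) * DOUBLING_YEARS
    print(f"~{delay_per_10x:.1f} years per order of magnitude")  # ~5.0 years

So being off by three orders of magnitude shifts the date by roughly 15 years, but that math is beside the point if the approach fails outright.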

zombiwoof 5 days ago

Disregarding the blowhard Eric Schmidt, nobody is close to understanding how the spirit/soul works, let alone the brain. It isn't just neural weights and connections.