lukeschlather 5 days ago

This is a really good overview, and it's remarkable how little modification it needs after several decades: at least in terms of the facts and predictions, everything has happened as the author says. I do want to pick at some of the numbers in the upper bound, though, because we're getting close to the end of the first third of the century and we don't have ASI yet, even though we have roughly hit the upper bound the author defines.

> Since a signal is transmitted along a synapse, on average, with a frequency of about 100 Hz and since its memory capacity is probably less than 100 bytes (1 byte looks like a more reasonable estimate)

I admit my feeling is that neurons/synapses probably have less than 100 bytes of memory, and also that a byte or less is more plausible, but I would like to see some more rigorous proof that they can't possibly have more than a gigabyte of memory that the synapse/neuron can access at the speed of computation.

The author has a note where they handwave away the possibility that chemical processes could meaningfully increase the operations per second, and I'm comfortable with that, but this point:

> Perhaps a more serious point is that neurons often have rather complex time-integration properties

Seems more interesting, especially in the context of there being dramatically more storage available in neurons/synapses. If a neuron could do, say, a few operations per minute over 1 GB of data per synapse, for example. (Which sounds absurdly high, but just for the sake of argument.)

And I think putting in some absurdly generous upper bounds might be helpful. Since we're clearly past 100 TOPS, ask: how many H100s would you need if we made some absurd suppositions about the capacity of human synapses and neurons? It seems like we probably have enough. But you could also make a case that some of the largest supercomputing clusters are the only things that can actually match the upper bound for the capacity of a single human brain.

Although I think someone might be able to convince me that a manageable cluster of H100s already meets the most generous possible upper bound.
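A back-of-envelope version of that question. Every brain-side number here is an assumption for the sake of argument (a commonly cited rough synapse count and the ~100 Hz signaling rate from the paper), and the H100 figure is rounded to roughly 1000 dense TOPS:

```python
# Back-of-envelope: how many H100s to match various upper bounds
# on brain compute. All brain-side figures are assumptions for
# the sake of argument, per the discussion above.

SYNAPSES = 1e14          # commonly cited rough synapse count
FIRING_HZ = 100          # average signaling rate from the paper
H100_OPS = 1e15          # ~1000 dense TOPS, rounded

# Bostrom-style upper bound: one op per synapse per signal.
baseline_ops = SYNAPSES * FIRING_HZ            # 1e16 ops/s
h100s_baseline = baseline_ops / H100_OPS       # 10 H100s

# "Absurdly generous" bound: pretend each synaptic event involves
# 1000x more computation over local state.
generous_ops = baseline_ops * 1000             # 1e19 ops/s
h100s_generous = generous_ops / H100_OPS       # 10,000 H100s

print(f"baseline: {h100s_baseline:.0f} H100s")
print(f"1000x generous: {h100s_generous:.0f} H100s")
```

At the generous end you land around 10,000 cards, which is large-training-cluster territory, consistent with the "only the biggest clusters match the generous bound" reading.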

kelseyfrog 5 days ago | parent | next [-]

A 5090 has a peak theoretical limit of 3356 TOPS for generative AI workloads. So we're "already" an order of magnitude past what was considered enough for AGI. One question is, "What happened here?" Was the original estimate wrong? Have we not found the "right" algorithm yet? Something else?

lukeschlather 5 days ago | parent | next [-]

"We haven't found the 'right' algorithm yet." seems like the obvious answer, but the numbers in the paper all make sense and I'm interested in some more exotic explanations why it could actually be some orders of magnitude more than a 5090.

Although that's not looking at memory, and I'm also interested in an explanation on that front: a 5090 has 32 GB, while a human brain has more like a petabyte of memory assuming 1 byte/synapse. That's 1 million GB, in which case even a large cluster of H100s has an absurd amount of TOPS but nowhere near enough high-speed memory.
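The memory side as a sketch. The 1 PB figure assumes the upper-range synapse-count estimate of ~1e15 at 1 byte each (estimates run from ~1e14 to ~1e15), with 80 GB of HBM per H100:

```python
# Memory comparison: brain storage vs. accelerator memory.
# Assumes ~1e15 synapses at 1 byte each (the 1 PB figure above).

BRAIN_BYTES = 1e15        # ~1 petabyte, i.e. 1 million GB
H100_HBM_BYTES = 80e9     # 80 GB of HBM per H100
RTX_5090_BYTES = 32e9     # 32 GB on a 5090, for contrast

h100s_for_memory = BRAIN_BYTES / H100_HBM_BYTES       # 12,500 cards
ratio_vs_5090 = BRAIN_BYTES / RTX_5090_BYTES          # 31,250x

print(f"{h100s_for_memory:,.0f} H100s just to hold 1 PB in HBM")
print(f"that's {ratio_vs_5090:,.0f}x a 5090's VRAM")
```

So even on a compute-optimistic reading, fast memory rather than TOPS is where the comparison falls over.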

nvch 5 days ago | parent | prev | next [-]

We are constantly learning (updating the network) in addition to doing inference. Quite possibly our brains allocate more resources to learning than to inference.

Perhaps AI companies don’t know how to run continuous learning on their models:

* it’s unrealistic to do it for one big model because it will instantly start shifting in an unknown direction

* they can’t make millions of clones of their model, run them separately and set them free like it happens with humans

Mars008 4 days ago | parent [-]

> possibly that our brains allocate more resources to learning than to inference

It's likely that in the brain, inference is learning. If you want a technical analog, it's like a conversation with an LLM: previous tokens affect the tokens currently being generated. I.e., it's inference-time learning, well known and widely used.

SoftTalker 5 days ago | parent | prev [-]

Nature needed 3.5 billion years to work it out, and we're going to solve it in a few decades?

kelseyfrog 5 days ago | parent | next [-]

It depends on where we draw the starting line. We're already at parity with the span from 3.5 Bya to 541 Mya, because no neurons existed during that interval. Only more recently, in the Cambrian, do we have evidence that voltage-gated potassium signaling evolved [1].

That likely changes the calculus very little, but it feels more accurate.

1. https://www.cell.com/current-biology/pdf/S0960-9822(16)30489...

mathgeek 5 days ago | parent | prev | next [-]

I know it’s a silly question to begin with, but if you analyze it seriously, you’d want to at most compare the human intelligence-to-superintelligence jump with the 20 million years between the first Hominidae and Homo (and even that span is probably too large for some folks to accept as a comparison).

One could even argue you should only compare it back to the discovery of writing or similar.

Jyaif 5 days ago | parent | prev [-]

That's not an argument. Nature never worked out going into space, yet we solved it in a few decades.

jll29 5 days ago | parent | next [-]

Yes but that's "in a few decades" ON TOP of millions of years.

If I had to give an estimate, I would consider not so much the time taken to date as the current state of our knowledge of how the brain works, and how that knowledge has grown over the last decades. There is almost nothing we know as little about as the human brain and how thoughts are represented in it, modern imaging techniques notwithstanding.

exe34 5 days ago | parent [-]

> Yes but that's "in a few decades" ON TOP of millions of years.

If that's the bar, then anything else can fit in "a few decades", since that also rests "ON TOP of millions of years".

SoftTalker 5 days ago | parent | prev | next [-]

It worked out flying, though, millions of years before we did, and we still don't do it as well. We can't even do walking as well as nature does.

baq 5 days ago | parent | next [-]

Walking is easy compared to elbows, fingers and thumbs. It’s just falling over in a controlled fashion. I hear at least one company in Boston figured it out.

Anyway, humanoid robots should be big in the next 10-20 years. The compute, the batteries, the algorithms are all coming together.

derektank 5 days ago | parent | prev [-]

We do flying better. Adjusted for body weight, a modern airliner uses less energy per traveller-mile than the average migratory bird. And the airliner goes much faster.

gnz11 5 days ago | parent | prev [-]

One could argue nature solved it by evolving homo sapiens.

RaftPeople 5 days ago | parent | prev | next [-]

> I admit my feeling is that neurons/synapses probably have less than 100 bytes of memory, and also that a byte or less is more plausible, but I would like to see some more rigorous proof that they can't possibly have more than a gigabyte of memory that the synapse/neuron can access at the speed of computation.

Based on lots of reading about brain research and the relentless flow of new and unknown things that need further research, my personal gut feel is that the estimates in that paper about brain computational ability don't really have a valid foundation. There are too many things discovered since then and too many things still not understood.

Some interesting items:

1-Astrocytes are computational cells that need to be included in the math. They have internal calcium waves localized in their processes as well as across the entire cell and between cells.

2-Recent research showed that neuron signal timing down to the millisecond level carries information.

3-Individual cells (neurons and non-neurons) learn; they don't require a synapse and an external cell for that capability.

4-Neurons are influenced by the electromagnetic field around them, and somehow that influence would need to be included in a calculation of information flow.

AIPedant 4 days ago | parent | prev | next [-]

I think we are severely underestimating the computational complexity of animal brains by looking at short-term reactions and snap judgements, not deep thinking or long-term learning. Axons transmit electrical signals, and that's what Bostrom takes to be an "op." But they also transmit vesicles of mRNA and proteins directly from the cytoplasm of one neuron into another, which is an "op" of unimaginable complexity compared to a neuron simply firing (or any CPU instruction), and we have no clue what that means for cognition.

tim333 5 days ago | parent | prev [-]

Re the capabilities of neurons, the argument in Moravec's paper seems quite solid: he compares the capabilities of a bit of the brain we understand quite well, the retina, to computer programs performing the same function.
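A rough reconstruction of that scaling argument (the specific figures, roughly 1000 MIPS for retina-equivalent processing and a ~75,000x brain-to-retina size ratio, are from memory of Moravec's late-1990s estimates and should be treated as approximate):

```python
# Sketch of Moravec's retina-scaling estimate: take the compute
# needed to replicate retinal processing in software, then scale
# by the brain-to-retina size ratio. Figures are approximate.

RETINA_MIPS = 1_000          # ~1000 MIPS to match retinal processing
BRAIN_TO_RETINA = 75_000     # rough brain/retina size ratio

brain_mips = RETINA_MIPS * BRAIN_TO_RETINA   # ~7.5e7 MIPS
brain_ops_per_sec = brain_mips * 1e6         # ~7.5e13 ops/s

print(f"~{brain_ops_per_sec:.1e} ops/s, on the order of 100 TOPS")
```

Which is roughly where the ~100 TOPS figure discussed in this thread comes from; a single modern accelerator nominally exceeds it.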

My feeling is we have enough compute for ASI already but not algorithms like the brain. I'm not sure if it'll get solved by smart humans analysing it or by something like AlphaEvolve (https://news.ycombinator.com/item?id=43985489).

One advantage of computers being much quicker than needed is you can run lots of experiments.

Just the power requirements make me think current algorithms are pretty inefficient compared to the brain.