Roxxik 3 hours ago

I'm not sure that the analogy stretches so far.

What even is an artificial neuron in an Artificial Neural Network executed on "normal" (non-neuromorphic) hardware? It is a set of weights and an activation function.

And you evaluate all neurons of a layer at once by multiplying their weight matrix by the vector of incoming activations, then applying the activation function to get the outgoing activations.

Viewing this from a hardware perspective, there are no individual neurons, just matrix multiplications followed by activation functions.
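
Concretely, the whole "layer of neurons" collapses into a couple of lines of NumPy. This is a toy sketch with made-up shapes, not any particular framework's code:

    import numpy as np

    def relu(x):
        # elementwise activation function
        return np.maximum(0.0, x)

    def dense_layer(W, b, incoming):
        # one "layer of neurons": a matrix-vector product plus bias,
        # followed by the activation function -- no per-neuron objects anywhere
        return relu(W @ incoming + b)

    rng = np.random.default_rng(0)
    W = rng.standard_normal((3, 4))   # each row is one neuron's weights
    b = np.zeros(3)
    x = rng.standard_normal(4)        # incoming activations
    print(dense_layer(W, b, x))       # outgoing activations, shape (3,)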

I'm going out of my area of expertise here (I've only just started studying bioinformatics), but biological neurons can't simply hold an activation, because they communicate by depolarising their membrane. So they are spiking by their very nature as cells.

This depolarization costs a lot of energy, so they are incentivized to do more with fewer activations.

Computer hardware doesn't have a membrane and can simply hold activations, so it doesn't need spiking, and those activations cost very little on their own.
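
For contrast, here's a toy leaky integrate-and-fire sketch of what a spiking unit has to do; the parameters are made up for illustration, not a fitted neuron model:

    import numpy as np

    def lif_neuron(drive, dt=1e-3, tau=20e-3, v_rest=-65e-3,
                   v_thresh=-50e-3, v_reset=-65e-3):
        # Toy leaky integrate-and-fire unit: the membrane potential leaks
        # back toward rest, integrates its input drive, and emits a spike
        # (then resets) whenever it crosses threshold.
        v = v_rest
        spike_times = []
        for step, i in enumerate(drive):
            v += (-(v - v_rest) + i) * (dt / tau)
            if v >= v_thresh:
                spike_times.append(step * dt)
                v = v_reset   # the costly depolarize-and-reset cycle
        return spike_times

    # constant drive strong enough to cross threshold repeatedly
    print(lif_neuron(np.full(1000, 20e-3)))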

So I'm not sure what we stand to gain from more complicated artificial neurons.

On the other hand, artificial neural networks do need a lot of memory bandwidth to load these weights, so an approach that better integrates storage and execution might help, whether that's memristor tech or something else.
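
To put rough numbers on the bandwidth point (my own back-of-envelope with made-up round figures: 70B parameters, fp16, 2000 tokens/s, ignoring batching and caching):

    # A 70B-parameter model at 16 bits per weight is ~140 GB of weights.
    # At batch size 1, every generated token has to stream all of them,
    # so weight traffic, not arithmetic, tends to dominate.
    params = 70e9
    bytes_per_weight = 2
    weight_bytes = params * bytes_per_weight
    tokens_per_s = 2000   # illustrative generation rate
    print(weight_bytes / 1e9, "GB streamed per token")
    print(weight_bytes * tokens_per_s / 1e12, "TB/s of weight reads at that rate")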

ilaksh 2 hours ago | parent

Cerebras uses SRAM integrated into a giant chip, I think. Inference is extremely fast -- they say 70x faster than GPU clouds, over 2000 tokens per second of output from a 70B model. But it still uses a ton of energy as far as I know, and the chips are, I assume, expensive to produce.

Memristors might work as a way to get the next 10x or 100x in efficiency from where Cerebras is.

As for more complex neurons, I was thinking that if each unit were of a similar order of magnitude in size but could somehow do more work, then that could be more efficient.