| ▲ | fluoridation 5 days ago |
| If we suppose that ANNs are more or less accurate models of real neural networks, the reason why they're so inefficient is not algorithmic, but purely architectural. They're just software. We have these huge tables of numbers and we're trying to squeeze them as hard as possible through a relatively small number of multipliers and adders. Meanwhile, a brain can perform a trillion fundamental operations simultaneously, because every neuron is a complete processing element independent of every other one. To bring that back into more concrete terms: if we took an arbitrary model and turned it into a bespoke piece of hardware, it would certainly be at least one or two orders of magnitude faster and more efficient, with the downside that, being dead silicon, it could not be changed and iterated on. |
|
| ▲ | penteract 5 days ago | parent | next [-] |
| If you account for the fact that biological neurons operate at a much lower frequency than silicon processors, then the raw performance numbers get much closer. From what I can find, the neuron membrane time constant is around 10ms [1], meaning 10 billion neurons could produce 1 trillion activations per second, which is in the realm of modern hardware. The people mentioned in [2] have done the calculations from a more informed position than I have, and reach numbers like 10^17 FLOPS from a calculation that resembles this one. [1] https://spectrum.ieee.org/fast-efficient-neural-networks-cop... [2] https://aiimpacts.org/brain-performance-in-flops/ |
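The arithmetic in this comment can be sketched directly. All the numbers below are the rough order-of-magnitude assumptions stated above (neuron count, time constant, and typical per-synapse cost figures), not measurements:

```python
# Back-of-the-envelope version of the estimate above.
neurons = 1e10            # ~10 billion neurons (order of magnitude)
time_constant_s = 0.01    # ~10 ms membrane time constant
activations_per_s = neurons / time_constant_s
print(f"{activations_per_s:.0e} activations/s")   # ~1e12, i.e. 1 trillion

# Estimates like [2] then multiply by assumed synapse counts and a
# per-synapse FLOP cost (both are assumptions, not measurements):
synapses_per_neuron = 1e4
flops_per_synapse = 10
flops = activations_per_s * synapses_per_neuron * flops_per_synapse
print(f"{flops:.0e} FLOPS")                       # ~1e17
```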
|
| ▲ | mindcrime 5 days ago | parent | prev | next [-] |
| > the reason why they're so inefficient is not algorithmic, but purely architectural. I would agree with that, with the caveat that in my mind "the architecture" and "the algorithm" are sort of bound up with each other. That is, one implies the other -- to some extent. And yes, fair point that building dedicated hardware might be part of the solution to making something that runs much more efficiently. The only other thing I would add is that - relative to what I said in the post above - when I talk about "algorithmic advances" I see everything as potentially being on the table, including maybe something different from ANNs altogether. |
|
| ▲ | HarHarVeryFunny 4 days ago | parent | prev | next [-] |
| The energy inefficiency of ANNs vs our brain is mostly because our brain operates in async dataflow mode, with each neuron consuming energy mostly only when it fires. If a neuron's inputs haven't changed, then it doesn't redundantly "recalculate its output" like an ANN - it just does nothing. You could certainly implement an async dataflow design in software, although maybe not as power-efficiently as with custom silicon, but individual ANN node throughput would suffer given the need to aggregate the neurons needing updates into a group to be fed into one of the large matrix multiplies that today's hardware is optimized for; sparse operations are also a possibility. OTOH, conceivably one could save enough FLOPs that it'd still be a win in terms of how fast an input could be processed through an entire neural net. |
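A toy sketch of the dataflow idea (hypothetical code, not any real framework): when only a few inputs to a layer change, you can update only the affected columns of the weight matrix instead of redoing the full matrix multiply, and get the same pre-activations:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((1000, 1000))   # weights: 1000 neurons x 1000 inputs
x = rng.standard_normal(1000)           # current input vector
pre = W @ x                              # dense pass: every neuron recomputed

# Now only a handful of inputs change; a dataflow design touches only
# the affected columns of W -- O(n*k) work instead of O(n^2).
changed = [3, 7, 42]
dx = rng.standard_normal(len(changed))
pre_sparse = pre + W[:, changed] @ dx    # incremental update
x[changed] += dx
assert np.allclose(pre_sparse, W @ x)    # matches a full recompute
```

This only captures the FLOP savings; the comment's point about grouping and hardware utilization is exactly why such sparse updates are hard to make fast on matmul-optimized accelerators.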
|
| ▲ | chasd00 5 days ago | parent | prev | next [-] |
| > If we suppose that ANNs are more or less accurate models of real neural networks I believe the problem is that we don't understand actual neurons, let alone actual networks of neurons, well enough to know whether any model is accurate or not. The AI folks cleverly named their data structures "neuron" and "neural network" to make it seem like we do. |
| |
| ▲ | mindcrime 3 days ago | parent | next [-] | | > The AI folks cleverly named their data structures "neuron" and "neural network" to make it seem like we do. I don't think any (serious) neural network researchers are trying to trick anybody or claim greater fidelity to the operations of the human brain than is warranted. If anything, Hinton - one of the "godfathers of neural networks" in the popular zeitgeist - has been pretty outspoken about how ANNs bear only the most superficial resemblance to real neurons. Now, the "pop science" commenters, the "talking heads" and "influencer" types, and the marketing people - that's a different story... | |
| ▲ | munksbeer 5 days ago | parent | prev [-] | | This is a bit of a cynical take. Neural networks have been "a thing" for decades - a quick google suggests the 1940s. I won't quibble on the timeline, but no one was trying to trick anyone with the name back then, and it just stuck around. |
|
|
| ▲ | eikenberry 5 days ago | parent | prev [-] |
| > If we suppose that ANNs are more or less accurate models of real neural networks [..] ANNs were inspired by biological neural structures, and that's it. They are not representative models at all, even of the "less" variety. Dedicated hardware will certainly help, but no insight into how much it can help will come from this sort of comparison. |
| |
| ▲ | penteract 5 days ago | parent [-] | | Could you explain your claim that ANNs are nothing like real neural networks beyond their initial inspiration (if you'll accept my paraphrasing)? I've seen it a few times on HN, and I'm not sure what people mean by it. By my very limited understanding of neural biology, neurons activate according to inputs that are mostly activations of other neurons. A dot product of weights and inputs (i.e. one part of a matrix multiplication), together with a threshold-like function, doesn't seem like a horrible way to model this. On the other hand, neurons can get a bit fancier than a linear combination of inputs, and I haven't heard anything about biological systems doing something comparable to backpropagation, but I'd like to know whether we understand enough to say for sure that they don't. | | |
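The model described here - a dot product of weights and inputs pushed through a threshold-like nonlinearity - is a few lines of code. The numbers are made up for illustration:

```python
import numpy as np

def artificial_neuron(weights, inputs, bias, f=np.tanh):
    """One ANN unit: f(w . x + b) -- the 'one row of a matmul' view."""
    return f(np.dot(weights, inputs) + bias)

w = np.array([0.5, -1.2, 0.8])   # learned weights (made-up values)
x = np.array([1.0, 0.3, 0.6])    # inputs: mostly activations of other neurons
print(artificial_neuron(w, x, bias=0.1))
```

A whole layer is just this repeated for every row of a weight matrix, which is why the dense matmul is the natural software formulation.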
| ▲ | fluoridation 5 days ago | parent | next [-] | | >I haven't heard anything about biological systems doing something comparable to backpropagation The brain isn't organized into layers like ANNs are. It's a general graph of neurons, and cycles are probably common. | | |
| ▲ | HarHarVeryFunny 5 days ago | parent [-] | | Actually that's not true. Our neocortex - the "crumpled up" outer layer of our brain, which is basically responsible for cognition/intelligence - has a highly regular architecture. If you uncrumpled it, it'd be a thin sheet of neurons about the size of a tea towel, consisting of 6 layers of different types of neurons with specific inter-layer and intra-layer patterns of connections. It's not a general graph at all, but rather a specific processing architecture. | | |
| ▲ | fluoridation 5 days ago | parent [-] | | None of what you've said contradicts that it's a general graph rather than, say, a DAG. It doesn't rule out cycles either within a single layer or across multiple layers. And even if it did, the brain is not just the neocortex, and the neocortex isn't isolated from the rest of the topology. | | |
| ▲ | HarHarVeryFunny 4 days ago | parent [-] | | It's a specific architecture. Of course there are massive amounts of feedback paths, since that's how we learn - top-down prediction and bottom-up sensory input. There is of course looping too - e.g. the thalamo-cortical loop - we are not just a pass-thru reactionary LLM! Yes, there is a lot more structure to the brain than just the neocortex - there are all the other major components (thalamus, hippocampus, etc), each with its own internal architecture, and then specific patterns of interconnect between them. This all reinforces what I am saying - the brain is not just some random graph; it is a highly specific architecture. | | |
| ▲ | fluoridation 4 days ago | parent [-] | | Did I say "random graph", or did I say "general graph"? >There is of course looping too - e.g. the thalamo-cortical loop - we are not just a pass-thru reactionary LLM! Uh-huh. But I was responding to a comment about how the brain doesn't do something analogous to back-propagation. It's starting to sound like you've contradicted me in order to agree with me. | | |
| ▲ | HarHarVeryFunny 4 days ago | parent [-] | | I didn't say anything about back-propagation, but if you want to talk about that, then it depends on how "analogous" you want to consider it... It seems very widely accepted that the neocortex is a prediction machine that learns by updating itself based on sensory detection of top-down prediction failures, and with multiple layers (cortical patches) of pattern learning and prediction, there necessarily has to be some "propagation" of prediction-error feedback from one layer to another so that all layers can learn. Now, does the brain learn in a way directly equivalent to backprop, in terms of using exact error gradients or a single error function? Presumably not - it more likely works in layered fashion, with each higher level providing error feedback to the layer below, that feedback likely just being what was expected vs what was detected (i.e. not a gradient - essentially just a difference). Of course gradients are more efficient in terms of selecting varying update step sizes, but directional feedback would work fine too. It would also not be surprising if evolution has stumbled upon something similar to Bayesian updates in terms of how to optimally and incrementally update beliefs (predictions) based on conflicting evidence. So, that's an informed guess at how our brain is learning - up to you whether you want to regard that as analogous to backprop or not. |
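One hedged way to read the "difference, not a gradient" idea in code: update weights using only the direction of the prediction error rather than its exact gradient magnitude. This is a toy single-unit sketch with made-up numbers, not a model of cortex:

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.standard_normal(3)        # weights of one predictive unit
x = np.array([0.2, -0.5, 1.0])    # fixed input (made-up values)
target = 0.7                      # what was "detected" bottom-up
lr = 0.05

for _ in range(200):
    pred = w @ x                       # top-down prediction
    error = target - pred              # expected vs detected: just a difference
    w += lr * np.sign(error) * x       # directional update, no gradient magnitude
print(w @ x)                           # converges near the target
```

With only the sign of the error, the prediction still homes in on the target; it just oscillates within one fixed step size instead of taking the variable step sizes an exact gradient would give, which is the efficiency trade-off mentioned above.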
|
|
|
|
| |
| ▲ | scheme271 5 days ago | parent | prev [-] | | Neurons don't just work on electrical potentials; they also have multiple whole systems of neurotransmitters that affect their operation. So I don't think their activation is a continuous function. Although I suppose we could use non-continuous activation functions in a NN, I don't think there's an easy way to train a NN that does that. | | |
| ▲ | HarHarVeryFunny 4 days ago | parent [-] | | Sure, a real neuron activates by outputting a train of spikes after some input threshold has been crossed (a complex matter of synapse operation - not just a summation of inputs), while in ANNs we use "continuous" activation functions like ReLU... But note that the output of a ReLU, while continuous, is basically on or off, equivalent to a real neuron having crossed its activation threshold or not. If you really wanted to train artificial spiking neural networks in a biologically plausible fashion, then you'd first need to discover/guess what that learning algorithm is, which is something that has escaped us so far. Hebbian "fire together, wire together" may be part of it, but we certainly don't have the full picture. OTOH, it's not yet apparent whether an ANN design that more closely follows real neurons has any benefit in terms of overall function, although an async dataflow design would be a lot more efficient in terms of power usage. |
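To illustrate the "basically on or off" point: a ReLU unit and a crude 0/1 threshold unit agree on whether a unit is active, differing only in what they output once the threshold is crossed. This is a toy sketch, not a real spiking-network simulation:

```python
import numpy as np

def relu(v):
    # continuous, but zero below threshold: carries an on/off signal
    return np.maximum(v, 0.0)

def spike(v, threshold=0.0):
    # a real neuron emits a spike train once past threshold; here just 0/1
    return (v > threshold).astype(float)

v = np.array([-1.5, -0.2, 0.3, 2.0])   # pre-activation values (made up)
print(relu(v))     # zero for the first two, passes the rest through
print(spike(v))    # 0/1 on exactly the same split
```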
|
|
|