janalsncm 3 days ago

Those terms sound similar to biological concepts but they’re very different.

Neural networks are not like brains. They don’t grow new neurons. A “neuron” in an artificial neural net is represented by a single floating-point number (its activation), sometimes even quantized down to a 4-bit int. Their degrees of freedom are highly limited compared to a brain. Most importantly, the brain does not do backpropagation like an ANN does.
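
To make that concrete, here’s a minimal NumPy sketch of an artificial “neuron” (the numbers are made up): its entire output state is one float.

    import numpy as np

    def neuron(x, w, b):
        # A weighted sum plus a bias, passed through a ReLU
        # nonlinearity. The "neuron" outputs a single float.
        return max(0.0, float(np.dot(x, w) + b))

    print(neuron(np.array([0.5, -1.2, 3.0]), np.array([0.1, 0.4, -0.2]), 0.05))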

LSTMs have about as much to do with brain memory as RAM does.
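
For reference, the “memory” in an LSTM is just a vector updated by a few gated matrix multiplies per step. A minimal single-step sketch (shapes and gate ordering here are illustrative):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def lstm_step(x, h, c, W, b):
        # One step: the cell state c is the entire "memory",
        # rewritten by three multiplicative gates.
        z = W @ np.concatenate([h, x]) + b
        f, i, o, g = np.split(z, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
        return h, c

    h = c = np.zeros(8)
    W, b = np.zeros((32, 12)), np.zeros(32)
    h, c = lstm_step(np.ones(4), h, c, W, b)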

Attention is a specific mathematical operation applied to matrices.
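
Specifically, scaled dot-product attention is two matrix products and a softmax; a minimal sketch, assuming 2-D arrays for Q, K, V:

    import numpy as np

    def attention(Q, K, V):
        # softmax(Q K^T / sqrt(d)) V, with a max-subtraction
        # for numerical stability. That's the whole operation.
        scores = Q @ K.T / np.sqrt(Q.shape[-1])
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        return (w / w.sum(axis=-1, keepdims=True)) @ V

    Q = K = V = np.eye(4)
    print(attention(Q, K, V).shape)  # (4, 4)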

Activation functions are interesting because originally they were more biologically inspired and people used sigmoid. Now people tend to use simpler ones like ReLU or its leaky cousin. Turns out what’s important is creating nonlinearities.
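
The shift is easy to see side by side (a minimal sketch):

    import numpy as np

    def sigmoid(z):                  # the older, biologically inspired choice
        return 1.0 / (1.0 + np.exp(-z))

    def relu(z):                     # simpler; the nonlinearity is what matters
        return np.maximum(0.0, z)

    def leaky_relu(z, alpha=0.01):   # small negative slope keeps gradients alive
        return np.where(z > 0, z, alpha * z)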

Hallucinations in LLMs come down to the fact that they’re statistical models, not grounded in reality.
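
A toy illustration of that point, with made-up tokens and probabilities: sampling rewards plausibility, and nothing in the loop checks truth.

    import numpy as np

    # The model only ever sees a distribution over next tokens;
    # "Atlantis" is unlikely but will eventually be sampled.
    tokens = ["Paris", "Lyon", "Atlantis"]
    probs = np.array([0.85, 0.10, 0.05])
    print(np.random.default_rng().choice(tokens, p=probs))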

Evolutionary algorithms, I will give you that one, although they’re far less common than backprop.
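
For the curious, a minimal (1+1) evolution strategy looks like this (an illustrative sketch, not any particular library): mutate, keep the candidate if it’s better, no gradients anywhere.

    import numpy as np

    def evolve(loss, dim=5, steps=1000, sigma=0.1, seed=0):
        rng = np.random.default_rng(seed)
        best = rng.normal(size=dim)
        best_loss = loss(best)
        for _ in range(steps):
            cand = best + sigma * rng.normal(size=dim)  # mutate
            cand_loss = loss(cand)
            if cand_loss < best_loss:                   # select
                best, best_loss = cand, cand_loss
        return best

    w = evolve(lambda v: np.sum(v ** 2))
    print(np.sum(w ** 2))  # near 0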

akoboldfrying 3 days ago | parent | next [-]

Neural networks are a lot like brains. That they don't generally grow new neurons is something that (a) could be changed with a few lines of code and (b) seems like an insignificant detail anyway.
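
A sketch of those “few lines of code”, widening a hidden layer by one unit (illustrative shapes, not any particular framework’s API):

    import numpy as np

    def grow_neuron(W_in, W_out, scale=0.01, seed=0):
        # Add a row of incoming weights and a column of outgoing
        # weights, initialised near zero so the network's function
        # barely changes.
        rng = np.random.default_rng(seed)
        row = scale * rng.normal(size=(1, W_in.shape[1]))
        col = scale * rng.normal(size=(W_out.shape[0], 1))
        return np.vstack([W_in, row]), np.hstack([W_out, col])

    W1, W2 = grow_neuron(np.zeros((8, 4)), np.zeros((3, 8)))
    print(W1.shape, W2.shape)  # (9, 4) (3, 9)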

> the brain does not do back propagation

Do we know this? Ruling this out is tantamount to claiming that we know how brains do learn. My suspicion is that we don't currently know, and that it will turn out that, e.g., sleep does something that is a coarse approximation of backprop.

wizzwizz4 3 days ago | parent | next [-]

No, we're pretty sure brains don't do backprop. See e.g. https://doi.org/10.1038/s41598-018-35221-w

akoboldfrying 3 days ago | parent [-]

Do we know that backprop is disjoint from variational free energy minimisation? Or could it be that one is an approximation to or special case of the other? I Ctrl-F'd "backprop" and found nothing, so I think they aren't compared in the paper, but maybe this is common knowledge in the field.

wizzwizz4 3 days ago | parent [-]

Yeah, and people have made comparisons (which I can't find right now). Free energy minimisation works better for some ML tasks (better fit on less data, with less overfitting) but is computationally expensive to simulate in digital software. (Quite cheap in a physical model, though: I might recall, or might have made up, that you can build such a system with water.)

daveguy 3 days ago | parent | prev [-]

Neural networks are only superficially like brains, in that both are composed of many functional units. That is the extent of the similarity.

ants_everywhere 3 days ago | parent | prev [-]

Neural networks are explicitly modeled on brains.

I don't know where this "the things have similar names but they're unrelated" trope is coming from, but it's not from people who know what they're talking about.

Like I said, go back and read the research. Look at where it was done. Look at the title of Marvin Minsky's thesis. Look at the research on connectionism from the 40s.

I would wager that every major paper about neuroscience from 1899 to 2020 or so has been thoroughly mined by the AI community for ideas.

janalsncm 3 days ago | parent [-]

You keep saying people who disagree with you don’t know what they’re talking about. I build neural networks for a living. I’m not creating brains.

Just because a plane is named an F/A-18 Hornet doesn’t mean it shares flight mechanisms with an insect.

Artificial neural nets are very different from brains in practice, for the reasons I mentioned above, but also because no one is trying to build a brain; they are trying to predict clicks or recommend videos, etc.

There is software which does attempt to model brains explicitly. So far we haven’t simulated anything more complex than a fly.