akoboldfrying 3 days ago

Neural networks are a lot like brains. That they don't generally grow new neurons is something that (a) could be changed with a few lines of code and (b) seems like an insignificant detail anyway.
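
As a sketch of (a): in a framework like PyTorch you can "grow" a neuron by widening a layer and copying the old weights over. This is a hypothetical illustration, not anything brains do; the layer sizes and the zero-init of the new unit's outgoing weights are my own assumptions:

    import torch
    import torch.nn as nn

    def grow_neuron(layer: nn.Linear, next_layer: nn.Linear):
        """Widen `layer` by one output unit and `next_layer` by one input unit."""
        wider = nn.Linear(layer.in_features, layer.out_features + 1)
        downstream = nn.Linear(next_layer.in_features + 1, next_layer.out_features)
        with torch.no_grad():
            wider.weight[:-1] = layer.weight       # keep the old neurons' weights
            wider.bias[:-1] = layer.bias           # new row keeps its random init
            downstream.weight[:, :-1] = next_layer.weight
            downstream.weight[:, -1] = 0.0         # new neuron starts with no downstream effect
            downstream.bias.copy_(next_layer.bias)
        return wider, downstream

    h1, h2 = grow_neuron(nn.Linear(8, 16), nn.Linear(16, 4))  # hypothetical sizes

Zero-initialising the new unit's outgoing weights means the grown network computes exactly what the old one did until training recruits the new neuron.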

> the brain does not do back propagation

Do we know this? Ruling this out is tantamount to claiming that we know how brains do learn. My suspicion is that we don't currently know, and that it will turn out that, e.g., sleep does something that is a coarse approximation of backprop.

wizzwizz4 3 days ago

No, we're pretty sure brains don't do backprop. See e.g. https://doi.org/10.1038/s41598-018-35221-w

akoboldfrying 3 days ago

Do we know that backprop is disjoint from variational free energy minimisation? Or could it be that one is an approximation to or special case of the other? I Ctrl-F'd "backprop" and found nothing, so I think they aren't compared in the paper, but maybe this is common knowledge in the field.

wizzwizz4 3 days ago

Yeah, and people have made comparisons (which I can't find right now). Free energy minimisation works better for some ML tasks (a better fit on less data, with less overfitting) but is computationally expensive to simulate in digital software. (Quite cheap in a physical model, though: I might recall, or might have made up, that you can build such a system with water.)
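
For intuition about the cost, here's a minimal predictive-coding-style sketch (one way of doing variational free energy minimisation; all shapes, rates, and data are made up for illustration). The expensive part is the inner settling loop: each example needs many latent-update iterations before a single weight update, where backprop gets its gradient in one backward pass:

    import numpy as np

    rng = np.random.default_rng(0)
    W = 0.1 * rng.normal(size=(10, 5))     # generative weights: latent -> observed

    def settle(y, W, n_iters=50, lr=0.1):
        # Inner loop: descend the energy E = 0.5*||y - W x||^2 + 0.5*||x||^2 in x.
        x = np.zeros(W.shape[1])
        for _ in range(n_iters):
            err = y - W @ x                # prediction error at the observed layer
            x += lr * (W.T @ err - x)      # error drives the latents; -x is a simple prior
        return x, err

    for y in rng.normal(size=(100, 10)):   # stand-in data, purely illustrative
        x, err = settle(y, W)              # costly per-example inference
        W += 0.01 * np.outer(err, x)       # local, Hebbian-like descent step on E

Note the weight update only uses quantities local to the connection (pre-synaptic activity and post-synaptic error), which is part of why this family of models is usually considered more biologically plausible than backprop.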

daveguy 3 days ago

Neural networks are at best superficially like brains, in that both are composed of many functional units. That is the extent of the similarity.