andy12_ 4 hours ago
> Even with continuous backpropagation and "learning"

That's what I said. Backpropagation cannot be enough; that's not how neurons work in the slightest. When you put biological neurons in a Pong environment, they learn to play not through any kind of loss or reward function; they self-organize to avoid unpredictable stimulation. As far as I know, no architecture learns in such an unsupervised way. https://www.sciencedirect.com/science/article/pii/S089662732...
torginus 2 hours ago | parent
Forgive me for being ignorant, but 'loss' in a supervised-learning ML context encodes how unlikely (high loss) or likely (low loss) the network's prediction of the output was, given the input. This sounds very similar to what those neurons do (avoid unpredictable stimulation).
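To make the "loss as surprise" reading concrete: with a negative log-likelihood loss (a common supervised objective), the loss literally is how surprised the network was by the true output. A minimal sketch:

```python
import math

def nll_loss(predicted_probs, true_index):
    """Negative log-likelihood: large when the network
    assigned low probability to the true outcome."""
    return -math.log(predicted_probs[true_index])

# True class is index 0 in both cases.
confident = nll_loss([0.9, 0.05, 0.05], 0)  # network expected this outcome
surprised = nll_loss([0.05, 0.9, 0.05], 0)  # network found it unlikely

print(confident, surprised)
```

The second call yields a much larger loss than the first, so minimizing it pushes the network toward predictions that make its inputs unsurprising, which is at least superficially analogous to neurons avoiding unpredictable stimulation.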