bc569a80a344f9c 3 days ago

> So a neuron does very basic polynomial interpolation and by hooking them together you get polynomial regression

The article glosses over activation functions, which, if non-polynomial, give the entire neural network its non-linearity. A major inflection point was proving that neural network architectures with very few layers (as few as a single hidden layer) can approximate any continuous function.

https://en.m.wikipedia.org/wiki/Universal_approximation_theo...
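
To make this concrete, here's a minimal sketch (assuming NumPy; the width, seed, and target function are my own arbitrary choices, not from the article) of a one-hidden-layer network with a non-polynomial activation fitting a continuous function. The hidden weights are random fixed features and only the linear output layer is solved for, which is enough to show the approximation capacity:

    import numpy as np

    # Illustrative sketch, not from the article: one hidden layer,
    # tanh activation, approximating sin(3x). Random hidden weights
    # act as fixed features; the linear output layer is fit by
    # least squares.
    rng = np.random.default_rng(0)
    width = 200
    x = np.linspace(-np.pi, np.pi, 500)[:, None]
    y = np.sin(3 * x)

    W = rng.normal(scale=2.0, size=(1, width))    # random input weights
    b = rng.uniform(-np.pi, np.pi, size=width)    # random biases
    H = np.tanh(x @ W + b)                        # hidden activations

    coef, *_ = np.linalg.lstsq(H, y, rcond=None)  # fit output layer
    print("max abs error:", np.abs(H @ coef - y).max())

Swap tanh for a polynomial activation like x**2 and the span of the hidden layer is just quadratics in x, so the fit stops improving no matter the width; that's exactly the non-polynomial condition in the theorem.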

Grimblewald 2 days ago | parent

Furthermore, many apparent discontinuities can be removed or smoothed by choosing a more appropriate domain, codomain, or topology. This means a neural network can not only approximate any continuous function, but can also learn to approximate many discontinuous ones, provided they aren't fundamentally discontinuous.

https://en.m.wikipedia.org/wiki/Classification_of_discontinu...
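
As a small illustration of the removable case (my own example, not from the linked page): sin(x)/x is undefined at x = 0, but filling the hole with its limit value 1 yields a continuous function, which the same one-layer setup from above then fits without trouble:

    import numpy as np

    # Illustrative sketch: sin(x)/x has a removable discontinuity
    # at 0; np.sinc(x/pi) equals sin(x)/x with the hole filled by
    # its limit, 1, making the target continuous.
    rng = np.random.default_rng(1)
    width = 200
    x = np.linspace(-10, 10, 500)[:, None]
    y = np.sinc(x / np.pi)

    W = rng.normal(scale=1.0, size=(1, width))
    b = rng.uniform(-10, 10, size=width)
    H = np.tanh(x @ W + b)

    coef, *_ = np.linalg.lstsq(H, y, rcond=None)
    print("max abs error:", np.abs(H @ coef - y).max())

A genuine jump discontinuity, by contrast, can only be approximated by ever-steeper sigmoid-like ramps around the jump, never matched uniformly.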