astrobe_ 3 days ago
> These days, computers can easily recognize photos of cats, but that’s not because a clever programmer discovered a way to isolate the essence of “catness.”

It could have been, and in some cases it did happen: computer vision didn't wait for neural networks (e.g. OCR). But to hijack a famous quote, "Neural networks are like violence - if they don't solve your problems, you are not using enough of them."

> A neuron with two inputs has three parameters. Two of them, called weights, determine how much each input affects the output. The third parameter, called the bias, determines the neuron’s overall preference for putting out 0 or 1.

So a neuron does very basic polynomial interpolation, and by hooking neurons together you get polynomial regression. I don't know whether it's amusing or amazing that people now use polynomial regression to write programs.
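For concreteness, here is a minimal sketch of the neuron the quoted passage describes - two inputs, two weights, one bias. The sigmoid squashing function and the specific weights are my own assumptions; neither is given in the quote:

    import math

    def neuron(x1: float, x2: float, w1: float, w2: float, b: float) -> float:
        # Weighted sum of inputs plus bias: a degree-1 polynomial in x1 and x2,
        # which is what the "polynomial interpolation" remark refers to.
        z = w1 * x1 + w2 * x2 + b
        # Squash into (0, 1); the bias shifts the neuron's overall preference
        # for outputs near 0 or near 1.
        return 1 / (1 + math.exp(-z))

    # Hand-picked weights so the neuron roughly computes logical AND:
    print(neuron(0, 0, 6.0, 6.0, -9.0))  # ~0.0001 (near 0)
    print(neuron(1, 1, 6.0, 6.0, -9.0))  # ~0.95   (near 1)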
bc569a80a344f9c 3 days ago
> So a neuron does very basic polynomial interpolation, and by hooking neurons together you get polynomial regression

The article glosses over activation functions, which - if non-polynomial - give the entire neural network its non-linearity. A major inflection point was the proof that neural network architectures with very few layers (as few as one hidden layer) can approximate any continuous function.

https://en.m.wikipedia.org/wiki/Universal_approximation_theo...
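A small sketch (my own, not from the thread) of why the activation function matters: with no non-linearity between layers, a stack of linear layers collapses into a single linear map, so depth buys nothing; inserting tanh breaks that collapse:

    import numpy as np

    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(4, 2))  # first layer: 2 inputs -> 4 hidden units
    W2 = rng.normal(size=(1, 4))  # second layer: 4 hidden units -> 1 output
    x = rng.normal(size=2)

    # Two "layers" with no activation are equivalent to one matrix W2 @ W1.
    print(np.allclose(W2 @ (W1 @ x), (W2 @ W1) @ x))  # True

    # The same two layers with tanh in between are no longer linear in x,
    # which is what lets even a one-hidden-layer network approximate any
    # continuous function on a compact set (the universal approximation theorem).
    y = W2 @ np.tanh(W1 @ x)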