howardyou 3 hours ago
This is something I've been thinking about a lot lately that I'd like to better understand. Are there any examples in physics or machine learning that you can think of that have more specific figures?
godelski an hour ago | parent
I'm not sure exactly what you mean. But if you're interested in the problem in general, any book on computational physics will make you face this constraint quickly. There's a reason people love first-order methods like Euler, and a reason second- or higher-order methods are needed in other situations. Or you could look at second-order gradient descent methods as they apply to machine learning (add "Hessian" to your search). You'll see there are tradeoffs involved. And note that with first-order methods alone you may not even be able to reach the same optima that second-order methods can.

Or you could dig into approximation theory. But first I'd just do some Taylor or Fourier expansions of some basic functions. This can help you get a feel for what's going on and why this relationship holds. The Taylor expansion one should be really easy: clearly the second derivative is more computationally intensive than the first, because in order to calculate the second derivative you have to also calculate the first, right?

Mind you, there are functions where higher-order derivatives are easier to calculate. For example, the 100th derivative of x is just as easy to calculate as the second. But these are not the classes of functions we're usually trying to approximate...
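To make the cost difference concrete, here's a rough sketch using finite differences on a toy function (the Rosenbrock function, step sizes, and starting point are arbitrary picks, just to show the bookkeeping, and I'm assuming NumPy): the gradient needs on the order of n extra function evaluations, while the Hessian needs on the order of n^2, and every Hessian column reuses a full gradient computation, which is the "you have to also calculate the first" point.

    import numpy as np

    def rosenbrock(x):
        # Toy non-quadratic test function; its Hessian changes with x.
        return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

    def fd_gradient(f, x, h=1e-6):
        # Forward differences: ~n extra function evaluations for an n-dim input.
        n = x.size
        f0 = f(x)
        g = np.empty(n)
        for i in range(n):
            xp = x.copy()
            xp[i] += h
            g[i] = (f(xp) - f0) / h
        return g

    def fd_hessian(f, x, h=1e-4):
        # Central differences of the gradient: ~n^2 function evaluations,
        # because each column requires two full gradient computations.
        n = x.size
        H = np.empty((n, n))
        for i in range(n):
            xp, xm = x.copy(), x.copy()
            xp[i] += h
            xm[i] -= h
            H[:, i] = (fd_gradient(f, xp, h) - fd_gradient(f, xm, h)) / (2 * h)
        return 0.5 * (H + H.T)  # symmetrize away numerical noise

    x = np.array([-1.2, 1.0, 0.8, 1.1])

    # First-order step: cheap per iteration, uses only the local slope.
    g = fd_gradient(rosenbrock, x)
    x_gd = x - 1e-3 * g

    # Second-order (Newton) step: solve H p = -g. Much more work per step,
    # but it uses curvature information the first-order step can't see.
    H = fd_hessian(rosenbrock, x)
    x_newton = x + np.linalg.solve(H, -g)

    print("gradient-descent step:", x_gd)
    print("Newton step:          ", x_newton)

The particular numbers don't matter; the point is that each extra order of derivative multiplies the work per step, which is the tradeoff second-order methods have to win back by taking fewer steps.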