adgjlsfhk1 3 days ago

This is less about numerical instability and more that iterative algorithms with error control bound the error of their result, but when you run AD on them you are differentiating the approximation, and the derivative of an approximation can be arbitrarily different from an approximation of the derivative.
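A minimal sketch of that point (a hypothetical example, not code from the thread; `approx_identity` and the tolerance are made up for illustration). The approximation below is within `tol` of the true function for every input, yet its derivative is exactly zero, because the iteration count is piecewise constant in the input:

```python
def approx_identity(x, tol):
    # Error-controlled approximation of f(x) = x:
    # step toward x in increments of tol, stop once within tol.
    # Guarantees |approx_identity(x, tol) - x| <= tol for x >= 0.
    y = 0.0
    while x - y > tol:
        y += tol
    return y

def fd_derivative(f, x, h=1e-8):
    # A finite difference stands in for AD here: both differentiate
    # the approximation that was actually computed, not the
    # underlying function it approximates.
    return (f(x + h) - f(x)) / h

tol = 1e-3
x = 1 / 3
print(abs(approx_identity(x, tol) - x) <= tol)          # True: value is accurate
print(fd_derivative(lambda t: approx_identity(t, tol), x))  # 0.0, but f'(x) = 1
```

Tightening `tol` makes the value more accurate without ever moving the derivative toward 1: error control constrains the output, not its sensitivity to the input.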

ogogmad 3 days ago | parent [-]

That makes more sense. The title is flat out wrong IMO.

adgjlsfhk1 3 days ago | parent [-]

I think it is correct. Lots of people view AD as a black box that you can throw algorithms at and get derivatives out, and this shows that that isn't true.

wakawaka28 3 days ago | parent [-]

If you wrote code that failed to compile, you wouldn't impulsively call your compiler incorrect. This title sounds like it puts the blame in the wrong place. You can get error accumulation from even a basic calculation in a loop. We could try to solve these problems, but it's not the algorithm's fault if you don't know what you're doing.

ChrisRackauckas 17 hours ago | parent [-]

This has nothing to do with floating point error accumulation or numerical stability in the floating point sense. You can do this with arbitrary-precision floating point values and you will still get the same non-convergence result.
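As a concrete sketch of why precision doesn't help (a hypothetical example, not code from the thread): run an error-controlled iteration in exact rational arithmetic, where there is no rounding error at all, and the difference quotient of the approximation is still zero. The non-smoothness is algorithmic, not a floating point artifact:

```python
from fractions import Fraction

def approx_identity(x, tol):
    # Error-controlled approximation of f(x) = x in EXACT arithmetic:
    # no rounding anywhere, and the result is within tol of x.
    y = Fraction(0)
    while x - y > tol:
        y += tol
    return y

tol = Fraction(1, 1000)
x = Fraction(1, 3)
h = Fraction(1, 10**8)

# Exact difference quotient of the approximation: 0, not 1.
# The iteration count is piecewise constant in x, so the computed
# function is flat almost everywhere regardless of precision.
d = (approx_identity(x + h, tol) - approx_identity(x, tol)) / h
print(d)  # 0
```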