groos 5 days ago
I always see RKF (Runge-Kutta-Fehlberg) mentioned but almost never Bulirsch-Stoer (https://en.wikipedia.org/wiki/Bulirsch%E2%80%93Stoer_algorit...). Some years ago, I was looking to do high-precision orbital motion simulations (involving several bodies) and came across this in Numerical Recipes in C. I don't pretend to understand how B-S works, but it produced amazing precision with adaptive step-size adjustment. For highly chaotic systems, it held out far longer than RK.
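From what I did pick up, the core idea is: take one big step with the modified midpoint method at several different substep counts, then extrapolate those answers to an (imaginary) zero substep size. A toy scalar sketch of that in Julia, with a fixed even substep sequence and none of the adaptive step-size/order control a real implementation has:

    # Modified midpoint method: integrate y' = f(t, y) over one macro step H
    # using n substeps (leapfrog-style, so the error expands in even powers of h).
    function modified_midpoint(f, y0, t0, H, n)
        h = H / n
        ym, y = y0, y0 + h * f(t0, y0)              # first substep is plain Euler
        for i in 1:n-1
            ym, y = y, ym + 2h * f(t0 + i * h, y)   # midpoint (leapfrog) substeps
        end
        (y + ym + h * f(t0 + H, y)) / 2             # endpoint smoothing
    end

    # One Bulirsch-Stoer macro step: modified midpoint at increasing substep
    # counts, then Aitken-Neville extrapolation of the results toward h -> 0.
    function bs_step(f, y0, t0, H; seq = (2, 4, 6, 8, 10, 12))
        T = [modified_midpoint(f, y0, t0, H, n) for n in seq]
        for k in 2:length(seq), j in length(seq):-1:k
            r = (seq[j] / seq[j-k+1])^2             # h^2 extrapolation ratio
            T[j] = T[j] + (T[j] - T[j-1]) / (r - 1)
        end
        T[end]                                      # highest-order extrapolant
    end

    # Example: y' = -y over a single macro step; exact answer is exp(-1).
    f(t, y) = -y
    println(bs_step(f, 1.0, 0.0, 1.0), " vs ", exp(-1))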
adgjlsfhk1 5 days ago | parent | next
Unfortunately, the Numerical Recipes book is known for being pretty bad in general for ODEs. For chaotic systems at high accuracy (tight tolerances), you likely want to be using Vern9, a highly efficient 9th-order RK solver.
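With OrdinaryDiffEq.jl that looks something like this (the Lorenz system is just a stand-in chaotic example, and the tolerance values are illustrative):

    using OrdinaryDiffEq

    # Lorenz system: the classic chaotic test problem.
    function lorenz!(du, u, p, t)
        du[1] = 10.0 * (u[2] - u[1])
        du[2] = u[1] * (28.0 - u[3]) - u[2]
        du[3] = u[1] * u[2] - (8.0 / 3.0) * u[3]
    end

    prob = ODEProblem(lorenz!, [1.0, 0.0, 0.0], (0.0, 100.0))
    # Vern9: 9th-order explicit RK with adaptive steps. High-order methods
    # pay off in exactly this tight-tolerance regime.
    sol = solve(prob, Vern9(), abstol = 1e-12, reltol = 1e-12)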
ChrisRackauckas 5 days ago | parent | prev
B-S is just a (bad) explicit Runge-Kutta method. You can see this formally: take any order of the B-S method and write out its computation as a Runge-Kutta tableau. If you do this, you'll see it ends up being an explicit RK method that needs many more f evaluations to achieve the order and leading truncation error coefficients that the leading RK methods get. For example, 9th-order B-S uses IIRC about 2-3 times as many f evaluations as Vern9, while Vern9 gets something like 3 orders of magnitude less error (on average, w.r.t. LTE measurements). If you're curious about the details of optimizing RK methods, check out https://www.youtube.com/watch?v=s_t6dIKjUUc.

That isn't to say B-S methods aren't worth anything. The interesting thing is that they are a way to generate RK methods of arbitrary order (even if no specific order is efficient, it gives a general scheme), and these RK methods have a lot of parallelism that can be exploited. We worked through this a few years ago, building parallelized implementations. The results are here: https://ieeexplore.ieee.org/abstract/document/9926357/ (free version https://arxiv.org/abs/2207.08135) and are part of the DifferentialEquations.jl library.

The results are a bit mixed though, which is why they aren't used as the defaults. For non-stiff equations, even with parallelism it's hard to get them to match the best explicit RK methods; they just aren't efficient enough. For stiff equations, if you are in the regime where BLAS does not multithread yet (basically <256 ODEs, so 256x256 LU factorizations, though this is BLAS/CPU-dependent), you can use parallelization in the form of the implicit extrapolation methods to factorize different matrices at the same time. Because you're doing more work than something like an optimized Rosenbrock method, you don't get strictly better, but by using this to get a high-order method that exploits parallelism where other methods end up being serial, you can get a good 2x-4x if you tune the initial order and everything well. This is really nice because in the 4-200 ODE range it ends up being the fastest method for stiff ODEs that we know of right now according to the SciMLBenchmarks (https://docs.sciml.ai/SciMLBenchmarksOutput/stable/), and that range is a sweet spot where "most" user code online tends to be.

However, since it requires tuning to your system (setting threads properly, having your laptop plugged in, using more memory, possibly upping the initial order manually), it's not the most user-friendly method. So it doesn't make a great default, since you're putting a lot more on the user's mind for a 2x-4x... but it's a great option to have for the person really trying to optimize. This also points to different directions that would be more fruitful and beat this algorithm pretty handily, but that research is in progress.
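To make the first point concrete at the lowest order: Richardson-extrapolating forward Euler (one full step combined with two half steps) algebraically collapses to the classic 2-stage explicit midpoint RK method, i.e. the tableau c = (0, 1/2), a21 = 1/2, b = (0, 1). A quick numerical check of that identity:

    # Extrapolated Euler: 2*y_half - y_full cancels the O(h^2) error term.
    function euler_extrap(f, y, h)
        y_full = y + h * f(y)
        y1     = y + (h / 2) * f(y)
        y_half = y1 + (h / 2) * f(y1)
        2y_half - y_full
    end

    # The same update written directly as a 2-stage explicit RK method
    # (explicit midpoint: k1 = f(y), k2 = f(y + h/2*k1), y += h*k2).
    function midpoint_rk(f, y, h)
        k1 = f(y)
        k2 = f(y + (h / 2) * k1)
        y + h * k2
    end

    g(y) = -y + sin(y)   # arbitrary autonomous test right-hand side
    println(euler_extrap(g, 1.3, 0.01) - midpoint_rk(g, 1.3, 0.01))  # ~0 (roundoff)

Higher-order B-S steps unroll into bigger explicit tableaus the same way; they're just not competitive ones stage-for-stage.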
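For the stiff case, usage looks roughly like the sketch below. ROBER is a stand-in stiff test problem, and the solver/kwarg names (ImplicitHairerWannerExtrapolation, threading, init_order) are from my reading of the OrdinaryDiffEq.jl extrapolation solver docs, so double-check them there:

    using OrdinaryDiffEq

    # ROBER: classic stiff chemical kinetics test problem.
    function rober!(du, u, p, t)
        y1, y2, y3 = u
        du[1] = -0.04 * y1 + 1e4 * y2 * y3
        du[2] =  0.04 * y1 - 1e4 * y2 * y3 - 3e7 * y2^2
        du[3] =  3e7 * y2^2
    end

    prob = ODEProblem(rober!, [1.0, 0.0, 0.0], (0.0, 1e5))

    # Implicit extrapolation: each entry of the extrapolation sequence uses a
    # different internal step size, hence needs its own LU factorization, and
    # those factorizations are independent, so they can run on separate
    # threads. Start Julia with threads (julia -t auto) and tune init_order.
    alg = ImplicitHairerWannerExtrapolation(threading = true, init_order = 5)
    sol = solve(prob, alg, abstol = 1e-8, reltol = 1e-8)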