kronicum2025 4 days ago
It depends on the dimensionality of your system. If the system is low-dimensional, or the function is sufficiently smooth, then it makes sense to assume the function is a series of some kind and iteratively solve the least-squares problem with gradient descent. If you pick the right kind of series, you get extremely quick convergence. When the number of dimensions becomes very high, or the function is extremely irregular, variants of Monte Carlo generally beat every other method and reach much higher accuracy, though the accuracy is still far below what low-dimensional methods achieve.
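A minimal sketch of the low-dimensional idea above: pick a series basis (here a truncated Chebyshev series, which converges very fast for smooth functions), and fit its coefficients to samples by running plain gradient descent on the least-squares loss. The target function, the number of terms, and the step size are all illustrative assumptions, not anything from the comment.

```python
import numpy as np

# Smooth target function on [-1, 1] (chosen arbitrarily for illustration).
def f(x):
    return np.exp(-x**2) * np.cos(3.0 * x)

# Sample at Chebyshev-distributed points so the basis is well conditioned.
theta = np.linspace(0.0, np.pi, 400)
x = np.cos(theta)
y = f(x)

n_terms = 16
# Design matrix: A[i, k] = T_k(x[i]), the k-th Chebyshev polynomial.
A = np.polynomial.chebyshev.chebvander(x, n_terms - 1)

# Gradient descent on the least-squares loss 0.5 * mean((A @ c - y)**2).
c = np.zeros(n_terms)
lr = 0.5
for _ in range(5000):
    residual = A @ c - y
    c -= lr * (A.T @ residual) / len(x)

max_err = np.max(np.abs(A @ c - y))
print(f"max fit error: {max_err:.2e}")
```

With a basis matched to the function's smoothness, the coefficients decay rapidly and a handful of terms already gives near machine-precision accuracy; with a poorly chosen basis the same loop converges much more slowly.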
LegionMammal978 4 days ago | parent
> It depends on the dimensionality of your system. If the system is low-dimensional, or the function is sufficiently smooth, then it makes sense to assume the function is a series of some kind and iteratively solve the least-squares problem with gradient descent. If you pick the right kind of series, you get extremely quick convergence.

Thank you, this sounds like what I'm looking for. Would you know of any further resources on this? Most of what I've been playing with has 4 dimensions or fewer. (E.g., in one particular problem that vexed me, I had a 2D vector x(t) and an ODE x'(t) = F(t, x(t)), where F is smooth and well-behaved except for a singularity at the origin x = (0, 0). The trajectory always hits the origin in finite time, and I wanted to compute that hitting time from a given x(0). Series solutions work well in the vicinity of the origin; the problem is accurately getting to that vicinity.)
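The commenter's actual F is not given, so here is a hedged toy version of the same setup: take F(t, x) = -x/|x|, for which |x(t)| shrinks at unit speed and the exact hitting time from x(0) is |x(0)|. The sketch follows the split the commenter describes: integrate numerically until the trajectory enters a small neighborhood of the origin, then close the remaining gap with the known local behavior (for this toy field, the remaining time is just the remaining radius).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy field with a singularity at the origin; the true hitting time
# from x0 is |x0|, which lets us check the numerics.
def F(t, x):
    return -x / np.linalg.norm(x)

x0 = np.array([3.0, 4.0])  # |x0| = 5, so the exact hitting time is 5
eps = 1e-6                 # radius of the "vicinity of the origin"

# Terminal event: stop integrating when |x| drops to eps.
def near_origin(t, x):
    return np.linalg.norm(x) - eps
near_origin.terminal = True
near_origin.direction = -1

sol = solve_ivp(F, (0.0, 10.0), x0, events=near_origin,
                rtol=1e-10, atol=1e-12)

t_eps = sol.t_events[0][0]
# Local behavior near the origin: |x(t)| = eps - (t - t_eps),
# so the remaining time to the singularity is exactly eps.
t_hit = t_eps + eps
print(t_hit)
```

For a general F the last step would use a local series expansion around the origin instead of this closed form, but the structure (numerical integration up to a stopping radius, then an analytic correction) is the same.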