timefirstgrav 5 days ago

This aligns with what I've observed in computational physics.

Trying to handle instantaneous constraints and propagating modes with a single timestepper is often suboptimal.

I developed a framework that systematically separates these components: using direct elliptic solvers for constraints and explicit methods for flux evolution. The resulting algorithms are both more stable and more efficient than unified implicit approaches.

The key insight is that many systems (EM, GR, fluids, quantum mechanics) share the same pattern:

- An elliptic constraint equation (solved directly, not timestepped)
- A continuity law for charge/mass/probability flux
- Wave-like degrees of freedom (handled with explicit methods)

With this structure, you can avoid the stiffness issues entirely rather than trying to power through them with implicit methods.
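
To make the separation concrete, here is a rough, self-contained toy of my own (not the scheme from the paper): a 1D Poisson constraint solved directly with a linear solve, an explicit flux update for a conserved density, and an explicit update for a wave-like field.

    # Toy 1D illustration of the split: elliptic constraint solved directly,
    # explicit flux update for the density, explicit update for a wave field.
    import numpy as np

    N, L = 200, 1.0
    dx = L / N
    dt = 0.4 * dx                                   # CFL-limited explicit step
    x = np.linspace(0.0, L, N)
    rho = np.exp(-((x - 0.5) / 0.05) ** 2)          # conserved density
    u = np.exp(-((x - 0.3) / 0.05) ** 2)            # wave-like field
    v = np.zeros(N)                                 # its time derivative

    # Constraint d^2(phi)/dx^2 = -rho, handled by a direct solve, not a timestep.
    A = (np.diag(-2.0 * np.ones(N)) +
         np.diag(np.ones(N - 1), 1) +
         np.diag(np.ones(N - 1), -1)) / dx**2

    for step in range(100):
        phi = np.linalg.solve(A, -rho)              # instantaneous elliptic constraint
        E = -np.gradient(phi, dx)                   # field derived from the potential

        rho -= dt * np.gradient(rho * E, dx)        # explicit continuity/flux update

        v += dt * np.gradient(np.gradient(u, dx), dx)   # explicit wave update
        u += dt * v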

Paper: https://zenodo.org/records/16968225

semi-extrinsic 5 days ago | parent | next [-]

You are probably very familiar with it, but this has been the basis of most numerical solvers for the Navier-Stokes equations since the late 1960s:

https://en.wikipedia.org/wiki/Projection_method_(fluid_dynam...

A disadvantage is that you get a splitting error term that tends to dominate, so you gain nothing from using higher-order methods in time.
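
For anyone unfamiliar with the projection method, the elliptic half of that splitting is the pressure projection itself. A minimal sketch of my own (periodic box, FFT Poisson solve, diffusion-only predictor to keep it short):

    import numpy as np

    N, nu, dt = 64, 1e-2, 1e-3
    k = 2 * np.pi * np.fft.fftfreq(N)
    KX, KY = np.meshgrid(k, k, indexing="ij")
    K2 = KX**2 + KY**2
    K2_safe = np.where(K2 == 0, 1.0, K2)        # avoid 0/0 at the mean mode

    def project(u, v):
        # Elliptic step: solve the pressure Poisson equation in Fourier space
        # and subtract the gradient so the corrected velocity is divergence-free.
        uh, vh = np.fft.fft2(u), np.fft.fft2(v)
        div_h = 1j * KX * uh + 1j * KY * vh
        p_h = div_h / (-K2_safe)
        return (np.fft.ifft2(uh - 1j * KX * p_h).real,
                np.fft.ifft2(vh - 1j * KY * p_h).real)

    def step(u, v):
        # Explicit predictor (diffusion only here, for brevity), then project.
        lap = lambda f: np.fft.ifft2(-K2 * np.fft.fft2(f)).real
        return project(u + dt * nu * lap(u), v + dt * nu * lap(v))

The splitting error the parent mentions comes from decoupling the predictor from the projection, which is why higher-order time integration of the predictor alone doesn't buy you anything.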

wizzwizz4 5 days ago | parent | prev | next [-]

Please don't "publish" on Zenodo. If you think your work has merit, go arXiv -> peer review -> open access journal. Otherwise, put it on your own website. Zenodo is a repository for artefacts (mainly datasets): if you try to put papers on it, people will think you're a crank. It's about as damaging for your reputation (and the reputation of your work) as a paper mill.

Of course, make sure you've done a thorough literature search, and that your paper is written from the perspective of "what is the contribution of this paper to the literature?", since most people reading your work will not read it in isolation: it'll be the hundredth or thousandth paper they've skimmed, trying to find those dozen papers relevant to their work.

abdullahkhalids 4 days ago | parent | next [-]

Just to add, people in the field are unlikely to find a paper on Zenodo. I don't think any of the major search engines or databases for papers will include anything on Zenodo in their results.

That said, you can't post on arXiv unless someone endorses you, which may or may not be difficult for an independent researcher.

I think the best bet would be to submit your paper directly to a journal. However, the paper in GP is unlikely to be published by any reputable journal. One piece of direct feedback: if you can't explain at the start why your paper is relevant to current researchers, then why should anyone care? A sniff test for this is whether the introduction discusses recent papers that have tried to solve the same or similar problems. But the GP paper's references are over two decades old.

wizzwizz4 2 days ago | parent [-]

Getting someone to endorse you on arXiv is easy if you have work to show. (If you don't get endorsed, it's theoretically possible that the endorser might plagiarise your work, but you'll have a paper trail to prove it; I've heard more stories of PhD supervisors plagiarising their students' work than of arXiv endorsers doing so.) It's hard for us plebs who don't yet have work to put on the arXiv to get endorsements, but that's not actually much of a problem.

physicsguy 4 days ago | parent | prev | next [-]

In practical terms it’s not unusual to reset the integrator when something instantaneous happens. When I did magnetics research, applying an instantaneous field, for example, usually required this, because otherwise the adaptive integrator spends a lot of time reducing the time step size.
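
In, say, SciPy terms the pattern is just to stop at the switching time, apply the jump, and start a fresh solve from the new state rather than letting the step controller grind through the discontinuity (the toy relaxation model and names below are mine):

    import numpy as np
    from scipy.integrate import solve_ivp

    def rhs(t, m, h):
        return -(m - h)              # toy relaxation of a moment m toward field h

    t_switch, h_before, h_after = 1.0, 0.0, 1.0

    # Integrate up to the instant the field changes, then restart from that state.
    leg1 = solve_ivp(rhs, (0.0, t_switch), [0.5], args=(h_before,), rtol=1e-8)
    leg2 = solve_ivp(rhs, (t_switch, 3.0), leg1.y[:, -1], args=(h_after,), rtol=1e-8)

    t = np.concatenate([leg1.t, leg2.t])
    m = np.concatenate([leg1.y[0], leg2.y[0]])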

tomrod 5 days ago | parent | prev | next [-]

> Trying to handle instantaneous constraints and propagating modes with a single timestepper is often suboptimal.

When I read statements like this, I wonder whether it is related to the optimality conditions required for infinitely lived Bellman equations to have global and per-period policies in alignment.

timefirstgrav 5 days ago | parent [-]

That's a fascinating parallel! Both involve separating timeless constraints (value function / elliptic equation) from temporal dynamics (policy / flux evolution).

Trying to timestep through a constraint that should hold instantaneously creates artificial numerical difficulties. The Bellman equation's value iteration is actually another example of this same pattern...

tomrod 4 days ago | parent [-]

The core conditions for Bellman policy equivalence are pretty straightforward and are handled in Stokey and Lucas, Recursive Methods in Economic Dynamics:

[1] Discounting: The discount factor β ∈ (0,1) is crucial. It ensures convergence of the value function and prevents “infinite accumulation” problems.

[2] Compactness of state/action sets: The feasible action correspondence Γ(x) is nonempty, compact-valued, and upper hemicontinuous in the state x. The state space X is compact (or at least the feasible set is bounded enough to avoid unbounded payoffs).

[3] Continuity: The return (or reward) function u(x,a) is continuous in (x,a). The transition law f(x,a) is continuous in (x,a).

[4] Bounded rewards: u(x,a) is bounded (often assumed continuous and bounded). This keeps the Bellman operator well-defined and ensures contraction mapping arguments go through.

[5] Contraction mapping property: With discounting and bounded payoffs, the Bellman operator is a contraction on the space of bounded continuous functions. This guarantees existence and uniqueness of the value function V.

[6] Measurable selection for policies: Under the above continuity and compactness assumptions, the maximum in the Bellman equation is attained, and there exists a measurable policy function g(x) that selects optimal actions.
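
A minimal value-iteration sketch of how [1]–[6] fit together (a toy finite MDP with made-up numbers; with β ∈ (0,1) and bounded rewards the operator below is a sup-norm contraction, and the argmax at the end is the policy selection in [6]):

    import numpy as np

    beta = 0.95
    n_states, n_actions = 5, 3
    rng = np.random.default_rng(0)
    u = rng.uniform(size=(n_states, n_actions))                       # bounded rewards u(x, a)
    P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # transitions P(x' | x, a)

    V = np.zeros(n_states)
    for _ in range(1000):
        Q = u + beta * P @ V                      # expected continuation value
        V_new = Q.max(axis=1)                     # Bellman operator T applied to V
        if np.max(np.abs(V_new - V)) < 1e-10:     # sup-norm convergence (contraction)
            break
        V = V_new
    policy = Q.argmax(axis=1)                     # optimal policy g(x)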
