hedgehog 5 hours ago

In my one encounter with one of these systems, it induced new code and tooling complexity, orders-of-magnitude performance overhead for most operations, and much slower dev and debug workflows. All for... an occasional convenience far outweighed by the overall drag of using it. There are probably other environments where something like this makes sense, but I can't figure out what they are.

jedberg 4 hours ago | parent | next [-]

I'm not sure which one you used, but ideally these systems are so lightweight that the benefits outweigh the slight cost of developing with them. Besides the recovery benefit, there are observability and debugging benefits too.

hedgehog an hour ago | parent [-]

I don't want to start a debate about a specific vendor, but the cost was very high: leaky serialization of call arguments and results, then hairpinning messages across the internet and back to reach the workers, adding 200ms of overhead to a no-op call. There was some observability benefit, but it didn't allow for debugger access and had its own special way of packaging code, so it was a net add of complexity there too. That's not getting into the induced complexity of adding a bunch of RPC boundaries to fit their execution model. And all that aside, using the thing effectively still requires understanding their runtime model. I understand the motivation, but not the technical approach.
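
To make the serialization complaint concrete, the pattern looks roughly like this (a sketch, not any particular vendor's API; all names here are made up): every call boundary round-trips its arguments and result through a serializer, so anything that doesn't survive that trip (a socket, a DB handle, a lock) leaks the abstraction, and every call pays the hop out to the orchestrator and back.

    # Hypothetical durable-call wrapper illustrating "leaky serialization":
    # arguments and results must survive a serializer round trip, and each
    # call pays a network hop (the ~200ms hairpin mentioned above).
    import json

    def durable_activity(fn):
        def wrapper(*args, **kwargs):
            # Arguments get serialized to ship to a remote worker; anything
            # that doesn't survive json.dumps breaks right here.
            payload = json.dumps({"args": args, "kwargs": kwargs})
            # ... imagine a network hop to the orchestrator and back ...
            decoded = json.loads(payload)
            result = fn(*decoded["args"], **decoded["kwargs"])
            # The result makes the same round trip in reverse.
            return json.loads(json.dumps(result))
        return wrapper

    @durable_activity
    def fetch_report(url: str) -> dict:
        return {"url": url, "status": "ok"}

    print(fetch_report("https://example.com/report"))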

throwaway894345 4 hours ago | parent | prev [-]

> All for... an occasional convenience far outweighed by the overall drag of using it

If you have any long-running operation that could be interrupted mid-run by a network fluke (or the VM running your program being terminated, or your program being OOMed, or some issue with a third-party service your app talks to, etc.), and you don't want to restart the whole thing from scratch, you could benefit from these systems. The alternative is having engineers manually repair the state and restart execution in just the right place, and that scales very badly.
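
The core property is roughly this (a sketch with a local JSON file standing in for the durable store; every name here is hypothetical): each completed step gets recorded, so a rerun after a crash skips straight to the first unfinished step instead of starting over.

    # Minimal checkpoint-and-resume sketch: completed steps are recorded
    # durably, so rerunning the workflow after an interruption resumes at
    # the first step that hasn't finished yet.
    import json, os

    CHECKPOINT = "workflow_state.json"

    def load_state():
        if os.path.exists(CHECKPOINT):
            with open(CHECKPOINT) as f:
                return json.load(f)
        return {"completed": []}

    def save_state(state):
        with open(CHECKPOINT, "w") as f:
            json.dump(state, f)

    def run_step(name, fn, state):
        # Skip steps that completed before the last crash/restart.
        if name in state["completed"]:
            return
        fn()
        state["completed"].append(name)
        save_state(state)

    def workflow():
        state = load_state()
        run_step("provision_vm", lambda: print("provisioning VM"), state)
        run_step("configure_dns", lambda: print("configuring DNS"), state)
        run_step("install_tools", lambda: print("installing tools"), state)

    workflow()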

I have an application that needs to stand up a bunch of cloud infrastructure (a "workspace" in which users can do research) at the press of a button, and I want to make sure the right infrastructure exists even if some deployment attempt is interrupted or the upstream definition of a workspace changes. Every month there are dozens of network flukes or 5XX errors from remote endpoints that would otherwise leave these workspaces in a broken state and in need of manual repair. Instead, the system heals itself whenever the fault clears, and I basically never have to look at it. (I do periodically check the error logs to confirm the system is actually recovering from faults, in case it has actually caught fire and some bug in the alerting system is keeping things quiet.)
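
The shape of it is essentially a reconcile loop (a sketch under obvious assumptions; none of these functions are real APIs): diff the desired state against the observed state on every tick, and let transient 5XXs fall through to the next pass instead of paging anyone.

    # Sketch of a reconcile loop: converge actual infrastructure toward the
    # desired definitions, retrying transient faults on the next cycle.
    import time

    def get_desired_workspaces():
        # Stand-in for reading the upstream workspace definitions.
        return {"ws-1": {"size": "large"}, "ws-2": {"size": "small"}}

    def get_actual_workspaces():
        # Stand-in for querying the cloud provider.
        return {"ws-1": {"size": "large"}}

    def create_workspace(name, spec):
        print(f"creating {name} with {spec}")

    def reconcile_once():
        desired = get_desired_workspaces()
        actual = get_actual_workspaces()
        for name, spec in desired.items():
            if actual.get(name) != spec:
                try:
                    create_workspace(name, spec)
                except Exception as e:
                    # A 5XX or network fluke just means we try again on
                    # the next pass; the loop repairs the state, not an
                    # engineer.
                    print(f"{name} failed ({e}); will retry next cycle")

    # Control loop: runs forever, converging once a minute.
    while True:
        reconcile_once()
        time.sleep(60)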

hedgehog an hour ago | parent [-]

The system I used didn't have any notion of repair, just retry-forever. What did you use for that? I've written service-tree management tools that do that sort of thing on a single host, but not for any kind of distributed system.