baublet 8 hours ago

Reading the article, I didn’t see this answered: why not scale to more nodes if your workload is CPU bound? Spin up a container with 1 CPU and a few GB of RAM and scale that as wide as you need?

e.g., this certainly helps when the event loop is blocked, but so would FFI calls to another language for the CPU-bound work. I’d only reach for a new Node thread if those didn’t pan out, because there’s usually a LOT that goes into spinning up a new Node process in a container (isolating the data, making sure any bundlers and transpilers are working, making sure the worker doesn’t pull in all the app code, etc.).

Sidecar processes aren’t free, either. Now your processes are contending for the same pool of resources and can’t share anything, which IME means a higher likelihood of memory issues, especially if nothing limits how many workers your app can spawn.

Still, good article! Love seeing the ways people tackle CPU-bound workloads in an otherwise I/O-bound Node app.

n_e 8 hours ago | parent | next [-]

> but so could FFI calls to another language for the CPU bound work

Worker threads can be more convenient than FFI, as you don't need to compile anything, you can reuse the main application's functions, etc.

baublet 4 hours ago | parent [-]

True! Although in a lot of Node projects you DO have a compile chain (TypeScript) to account for. There’s an up-front cost to getting workers building correctly and sharing only the code they need. These days it’s much smaller than it used to be, though, so worker threads are seeing more use.

My point was that in many environments it’s easier to scale out than to account for all the extra complications of multiple processes in a single container.

zer00eyz 5 hours ago | parent | prev [-]

> few gb of ram ...

5 years ago I never would have given this comment a second thought.

Now I read it and have to wonder: when does the price of RAM start showing up in the butcher’s bill from your cloud provider?

baublet 4 hours ago | parent | next [-]

You have to pay that cost in a worker thread anyway, too. There’s no free lunch.

fragmede 5 hours ago | parent | prev [-]

I don't know about you, but my cloud provider has been charging me for the RAM on my compute instances since the beginning.

zer00eyz 4 hours ago | parent [-]

RAM has always been one of the major price drivers...

But the prices have gotten stupid: https://pcpartpicker.com/trends/price/memory/

https://appleinsider.com/articles/26/02/27/the-global-ram-an...