W4G1 4 hours ago
Interesting. Are you talking about the latency of spawning new workers, or of getting data from the main thread to a worker? For context, this library uses a lazily initialized thread pool (thread-per-core by default) where tasks are distributed across workers, much like Tokio in Rust. Workers only need to be initialized once, and passing data via structured clone is usually fast and well optimized in most engines. Better yet, you can use an ArrayBuffer (which can be transferred) or a SharedArrayBuffer (which can be shared between threads), both without any serialization overhead.
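For concreteness, here's a minimal sketch of those three data-passing options using the raw Worker API (the `worker.js` file name and the message shapes are invented for illustration; the library manages its own pool internally):

```js
// main.js — three ways to get data into a worker.
const worker = new Worker("worker.js");

// 1. Structured clone: the typed array is copied (fast, but still O(n)).
const copied = new Float64Array(1024);
worker.postMessage({ kind: "clone", data: copied });

// 2. Transfer: ownership of the underlying ArrayBuffer moves to the
//    worker with zero copying; the buffer is detached on this side.
const moved = new Float64Array(1024);
worker.postMessage({ kind: "transfer", data: moved }, [moved.buffer]);

// 3. SharedArrayBuffer: both threads see the same memory. Note that
//    browsers require cross-origin isolation (COOP/COEP headers).
const shared = new Float64Array(new SharedArrayBuffer(1024 * 8));
worker.postMessage({ kind: "shared", data: shared });
```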
nowaymo6237 an hour ago
It usually came from serializing and deserializing objects, though here it sounds like it's a shared JSON buffer? Even then there's a serialization bottleneck for anything that isn't a plain buffer, right? And you have to be mindful of how context and closures behave across the boundary (functions can't be structured-cloned at all). There's also spinning up the workers, but I suppose you could do that ahead of time. Maybe my complaint is self-inflicted and ultimately avoidable, but the complexity begins to mount.

There's also the queuing, blocking nature of web workers. I wish they could process messages asynchronously the same way JS I/O works, but that's not the case; instead you're batching full units of work, so the mental model is different. Anecdotally, in Firefox I ran into what looked like memory leak issues and had to hard restart.

Ultimately I went with service workers, which yes, sounds strange, but I found them much easier to work with: cancellable requests, async, long-lived in the background... but maybe it just works best for me ;)
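To make that last point concrete, here's a rough sketch of the service-worker pattern being described (the `/compute` route and `doExpensiveWork` are hypothetical, and service-worker registration is omitted):

```js
// sw.js — answer "compute" requests asynchronously inside the service worker.
self.addEventListener("fetch", (event) => {
  const url = new URL(event.request.url);
  if (url.pathname === "/compute") {
    event.respondWith(
      event.request
        .json()
        .then((input) => doExpensiveWork(input)) // hypothetical heavy function
        .then(
          (result) =>
            new Response(JSON.stringify(result), {
              headers: { "Content-Type": "application/json" },
            })
        )
    );
  }
});

// page.js — each job is just an async fetch, so cancellation comes for
// free via AbortController, and the service worker stays alive in the
// background between requests.
const controller = new AbortController();
const job = fetch("/compute", {
  method: "POST",
  body: JSON.stringify({ n: 42 }),
  signal: controller.signal,
}).then((res) => res.json());

// Later, e.g. when the user navigates away:
controller.abort();
```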