| ▲ | softfalcon 7 hours ago |
What happens when I send an extremely high throughput of data and the scheduler pauses garbage collection because my process is receiving too many interrupts delivering network events? (A common way network data is handed off to an application on many Linux distros.) Are there any concerns that the extra array overhead will make the application even more vulnerable to out-of-memory errors while GC is held off to process the big stream (or multiple streams)? I'm mostly curious; maybe this isn't a problem for JS engines, but I have sometimes seen GC get paused on high-throughput systems in Go, C#, and Java, which causes a lot of headaches.
| ▲ | conartist6 6 hours ago | parent |
Yeah, I don't think that's generally a problem for JS engines because of the incremental garbage collector. If all your memory usage patterns are ones the incremental collector can handle, you won't experience noticeable hangups, because the incremental collector doesn't stop the world. This was always pretty important for JS, since full collections do show up as hiccups in UI responsiveness.
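To make that concrete, here's a minimal sketch of the allocation pattern I mean, assuming a Node-style async iterable as the data source (`handleChunk` is a hypothetical placeholder, not anything from a real API):

    // Sketch: consume a high-throughput byte stream without retaining
    // every chunk, so buffers stay short-lived and the incremental
    // collector can reclaim them between iterations.
    async function drain(source: AsyncIterable<Uint8Array>): Promise<number> {
      let bytesSeen = 0;
      for await (const chunk of source) {
        handleChunk(chunk); // process, then drop the reference
        bytesSeen += chunk.byteLength;
        // `for await` yields to the event loop on each iteration,
        // giving the engine natural points to run incremental GC steps.
      }
      return bytesSeen;
    }

    function handleChunk(chunk: Uint8Array): void {
      // Hypothetical: parse or forward the chunk here. The thing to
      // avoid is appending raw chunks to a long-lived array, which is
      // what inflates the heap while collection is deferred.
    }

The key design point is that nothing long-lived holds the chunks, so even under sustained throughput the heap stays roughly flat instead of growing until a full collection is forced.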
| ||||||||