andersa | 4 hours ago
I had a similar problem while building a tool that processes a lot of data in the browser. I'd naively made a large array of identical objects, each holding a bunch of numeric fields. Turns out this works completely fine in Firefox. In Chrome, however, it produces millions of individual HeapNumber allocations (why is that a thing??) on top of the objects themselves, uses GBs of RAM, and is slow to access, making the whole thing unusable.

Replacing it with an SoA (struct-of-arrays) layout backed by TypedArrays (sketched below) made it fast in both browsers and fixed the memory overhead in Chrome. As someone more familiar with systems programming than with the web, I find the idea of a separate heap allocation for a single double baffling beyond belief. What were they thinking?
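A minimal sketch of that kind of restructuring (the field names here are hypothetical): instead of one object per record, keep one TypedArray per field, so the values live as raw doubles in a few flat buffers rather than as millions of individually boxed numbers.

```js
const N = 1_000_000;

// Array-of-structs: N objects, and in V8 potentially a fresh HeapNumber
// allocation for each non-integer field on top of every object.
const rowsAoS = [];
for (let i = 0; i < N; i++) {
  rowsAoS.push({ x: Math.random(), y: Math.random(), value: Math.random() });
}

// Struct-of-arrays: three flat buffers, no per-record objects at all.
const xs = new Float64Array(N);
const ys = new Float64Array(N);
const values = new Float64Array(N);
for (let i = 0; i < N; i++) {
  xs[i] = Math.random();
  ys[i] = Math.random();
  values[i] = Math.random();
}

// Field access becomes an indexed load instead of a property lookup.
const distSq = (i) => xs[i] * xs[i] + ys[i] * ys[i];
```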
kg | 3 hours ago
Yeah, this is a historical design difference between Firefox's SpiderMonkey JS engine and Chrome's V8.

SpiderMonkey uses (I'm simplifying here; there are cases where this isn't true) a trick where every value is 64 bits wide, and anything that isn't a double-precision float gets smuggled inside the bits of a NaN. That means a double, a float32, an int, or an object pointer can all be stored in a field of the same size. Great, but it creates problems and complications for asm.js/wasm, because you can't rely on all the bits of a NaN surviving a trip through the JS engine (there's a small probe of this below).

V8 instead allocates doubles on the heap; those are the HeapNumbers you saw. I forget the exact historical reason why. IIRC they also do some fancy stuff with integers: if your integer fits in 31 bits, it counts as a "Smi" (small integer) in that engine and is stored directly in the tagged value rather than on the heap, which gets it special performance treatment. So letting your integers get too big is also a performance trap, not just having double-precision numbers.

EDIT: I found something just now suggesting Smis are now 32 bits instead of 31 in 64-bit builds of V8, so that's cool!
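A minimal probe of the NaN-payload issue (an illustration of the bit layout only, not SpiderMonkey's actual tagging scheme): craft a NaN with recognizable payload bits, round-trip it through an ordinary JS number, and check whether the payload survived. Engines are allowed to canonicalize NaNs, so the two printed patterns may differ from one engine to another; that is exactly the asm.js/wasm complication.

```js
const buf = new ArrayBuffer(8);
const f64 = new Float64Array(buf);
const u64 = new BigUint64Array(buf);

// Quiet NaN (all exponent bits set, nonzero mantissa) with 0xdeadbeef
// smuggled into the low mantissa bits.
u64[0] = 0x7ff8000000000000n | 0xdeadbeefn;
console.log(u64[0].toString(16)); // 7ff80000deadbeef

const n = f64[0]; // read the pattern back out as an ordinary number (a NaN)
f64[0] = n;       // ...and write it back
console.log(u64[0].toString(16)); // payload bits may or may not survive
```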
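And the Smi vs HeapNumber distinction can be seen directly with V8's natives syntax; a sketch, assuming Node is run with --allow-natives-syntax (%DebugPrint is a V8 debugging intrinsic that dumps a value's internal representation, and its output format varies across V8 versions):

```js
// Run with: node --allow-natives-syntax smi.js  (filename is arbitrary)
%DebugPrint(42);      // prints something like "Smi: 0x2a (42)"
%DebugPrint(2 ** 40); // too big for a Smi: a heap-allocated HeapNumber
%DebugPrint(0.5);     // not an integer: also a HeapNumber
```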