loeg 6 hours ago

Sure! But there's a sleight of hand in the numbers here: we're talking about 25GB binaries with debuginfo, and then 2GB maximum offsets in the .text section. Of a 25GB binary, probably 24.5GB is debuginfo. You have to get into truly huge binaries before >2GB calls become an issue.
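The 2GB limit comes from the ordinary x86-64 `call rel32` encoding, which stores the displacement to the target as a signed 32-bit value. A minimal sketch of the reachability check (the helper name and addresses are made up for illustration):

```python
# The direct x86-64 call encoding holds a signed 32-bit displacement,
# so a target must lie within roughly +/-2^31 bytes of the call site.
INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

def reachable_with_rel32(call_site_end, target):
    """True if `target` fits in a signed 32-bit displacement
    measured from the end of the call instruction."""
    disp = target - call_site_end
    return INT32_MIN <= disp <= INT32_MAX

# Code near the start of a huge .text section calling code 1 GiB away
# is fine; 3 GiB away needs a longer sequence (e.g. load the absolute
# address into a register and do an indirect call).
print(reachable_with_rel32(0x1000, 0x1000 + (1 << 30)))  # True
print(reachable_with_rel32(0x1000, 0x1000 + (3 << 30)))  # False
```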

(I wonder, though I have no particular insight into this, whether LTO builds can do smarter things here -- most calls are local, so only the handful of far calls would need the more expensive spelling.)

benlivengood 2 hours ago | parent [-]

At Google I worked with one statistics aggregation binary[0] that was ~25GB stripped. The distributed build system wouldn't even build the debug version because it exceeded the maximum configured size for any object file. I never asked if anyone had tried factoring it into separate pipelines, but my intuition is that the extra processing overhead wouldn't have been worth splitting the business logic that way; once the exact set of necessary input logs is in memory, you might as well do everything you need to them, given the dramatically larger ratio of data size to code size.

[0] https://research.google/pubs/ubiq-a-scalable-and-fault-toler...