le-mark 4 days ago

> I had originally configured the server phoenix with only 12GB swap. I then had to restart ./build_all_fast_glibc.sh a few times because the Fil-C compilation ran out of memory. Switching to 36GB swap made everything work with no restarts; monitoring showed that almost 19GB swap (plus 12GB RAM) was used at one point. A larger server, 128 cores with 512GB RAM, took 8 minutes for Fil-C plus 6 minutes for musl, with no restarts needed.

Yikes, that’s a lot of memory! Fil-C is apparently doing a lot of static analysis.

mbrock 4 days ago | parent [-]

I think that's the build of LLVM+Clang itself.

collinfunk 4 days ago | parent [-]

Yes, linking LLVM takes up a lot of memory. The documented guidance is to allow one link job per 15 GB of RAM [1].

[1] https://llvm.org/docs/CMake.html#frequently-used-llvm-relate...
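
Back of the envelope, that guideline works out to something like the following sketch (Linux-only, reads total RAM from /proc/meminfo and ignores swap):

    # Rough estimate of how many parallel link jobs ~15 GB/job allows.
    mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
    mem_gb=$(( mem_kb / 1024 / 1024 ))
    link_jobs=$(( mem_gb / 15 ))
    [ "$link_jobs" -lt 1 ] && link_jobs=1
    echo "RAM: ${mem_gb} GB -> suggested parallel link jobs: ${link_jobs}"

On a 16 GB machine that comes out to a single link job; on the 512 GB server from the quote above, about 34.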

brucehoult 4 days ago | parent | next [-]

And, fairly uniquely, LLVM has an LLVM_PARALLEL_LINK_JOBS setting that is distinct from the number of parallel jobs for everything else. I think I was using that 15 years ago.
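
For reference, a configure along these lines caps link parallelism while leaving compile parallelism alone (a sketch against an llvm-project checkout; the project list and the cap of 2 are just example values):

    # LLVM_PARALLEL_LINK_JOBS is implemented with Ninja job pools, so use -G Ninja.
    cmake -G Ninja -S llvm -B build \
        -DCMAKE_BUILD_TYPE=Release \
        -DLLVM_ENABLE_PROJECTS=clang \
        -DLLVM_PARALLEL_LINK_JOBS=2
    # Compiles still use all cores; only the link steps are throttled.
    ninja -C build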

I wish GCC had it. I have a quad-core machine with 16 GB RAM that OOMs when building recent GCC -- 15 and HEAD for sure; I can't remember whether 14 is affected. Enabling even 1 GB of swap makes it work. The culprit is four parallel link jobs needing ~4 GB each.

There are only four of those link jobs, so a -j8 build (e.g., with hyperthreading) is no worse.
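
For anyone hitting the same OOM, a 1 GB swap file is quick to add on a typical Linux box (a sketch; the path is arbitrary, and on filesystems where fallocate can't back a swap file, dd is the usual fallback):

    # Create and enable a 1 GB swap file (not persistent across reboots
    # unless also added to /etc/fstab).
    sudo fallocate -l 1G /swapfile
    sudo chmod 600 /swapfile
    sudo mkswap /swapfile
    sudo swapon /swapfile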

Narishma 4 days ago | parent | prev [-]

Is that why the Rust toolchain can't be compiled on a 32-bit system?

the_mitsuhiko 3 days ago | parent [-]

It's part of the problem, though I'm pretty sure that at this point even rustc itself needs more than 3 GB of addressable memory.