skavi 7 hours ago

We evaluated a few allocators for some of our Linux apps and found (modern) tcmalloc to consistently win in time and space. Our applications are primarily written in Rust and the allocators were linked in statically (except for glibc). Unfortunately I didn't capture much context on the allocation patterns. I think in general the apps allocate and deallocate at a higher rate than most Rust apps (or more than I'd like at least).
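For context, swapping the allocator in a Rust binary is just the `#[global_allocator]` attribute; crates like `tcmalloc`, `mimalloc`, and `tikv-jemallocator` all plug in through the same `GlobalAlloc` trait. A minimal, std-only sketch using a hypothetical counting wrapper around the system allocator (not what we benchmarked, just the mechanism):

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

// Hypothetical counting wrapper; a real build would substitute an
// allocator crate (e.g. mimalloc or tikv-jemallocator) here instead.
struct CountingAlloc;

static ALLOCS: AtomicUsize = AtomicUsize::new(0);

unsafe impl GlobalAlloc for CountingAlloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        ALLOCS.fetch_add(1, Ordering::Relaxed);
        System.alloc(layout)
    }
    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        System.dealloc(ptr, layout)
    }
}

// Every heap allocation in the program now routes through CountingAlloc.
#[global_allocator]
static GLOBAL: CountingAlloc = CountingAlloc;

fn main() {
    let v: Vec<u64> = (0..1024).collect();
    println!("allocations so far: {}", ALLOCS.load(Ordering::Relaxed));
    drop(v);
}
```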

Our results from July 2025:

rows are <allocator>: <RSS>, <time spent in allocator operations>

  app1:
  glibc: 215,580 KB, 133 ms
  mimalloc 2.1.7: 144,092 KB, 91 ms
  mimalloc 2.2.4: 173,240 KB, 280 ms
  tcmalloc: 138,496 KB, 96 ms
  jemalloc: 147,408 KB, 92 ms

  app2, bench1:
  glibc: 1,165,000 KB, 1.4 s
  mimalloc 2.1.7: 1,072,000 KB, 5.1 s
  mimalloc 2.2.4:
  tcmalloc: 1,023,000 KB, 530 ms

  app2, bench2:
  glibc: 1,190,224 KB, 1.5 s
  mimalloc 2.1.7: 1,128,328 KB, 5.3 s
  mimalloc 2.2.4: 1,657,600 KB, 3.7 s
  tcmalloc: 1,045,968 KB, 640 ms
  jemalloc: 1,210,000 KB, 1.1 s

  app3:
  glibc: 284,616 KB, 440 ms
  mimalloc 2.1.7: 246,216 KB, 250 ms
  mimalloc 2.2.4: 325,184 KB, 290 ms
  tcmalloc: 178,688 KB, 200 ms
  jemalloc: 264,688 KB, 230 ms
tcmalloc was from github.com/google/tcmalloc/tree/24b3f29.

i don't recall which jemalloc was tested.

hedora 6 hours ago | parent | next [-]

I’m surprised (unless they replaced the core tcmalloc algorithm but kept the name).

tcmalloc (thread-caching malloc) assumes memory allocations have good thread locality. This is often a double win: less false sharing of cache lines, and most allocations hit thread-local data structures in the allocator.

Multithreaded async systems destroy that locality, so the allocator constantly hits the slow path: thread A allocates a buffer, the task yields, the request resumes on thread B, which frees the buffer and has to synchronize with thread A's cache to return it.

Are you using async rust, or sync rust?

skavi 6 hours ago | parent [-]

modern tcmalloc uses per-CPU caches via restartable sequences (rseq) [0]. we use async rust with multithreaded tokio executors (sometimes multiple in the same application), so relatively high thread counts.

[0]: https://github.com/google/tcmalloc/blob/master/docs/design.m...

usrnm 5 hours ago | parent [-]

How do you control which CPU your task resumes on? If you don't then it's still the same problem described above, no?

skavi an hour ago | parent [-]

on the OS scheduler side, i'd imagine there's some stickiness that keeps tasks from jumping wildly between cores. i'd expect migration to be modelled as a non-zero cost. complete speculation though.

on the tokio scheduler side, the executor is thread-per-core and work stealing of in-progress tasks shouldn't happen too often.

for all thread pool threads or threads unaffiliated with the executor, see earlier speculation on OS scheduler behavior.

packetlost 38 minutes ago | parent [-]

Correct. The Linux scheduler has been NUMA-aware and sticky for a while (which is more or less what this reduces to in common scenarios).

ComputerGuru 6 hours ago | parent | prev | next [-]

That’s a considerable regression for mimalloc between 2.1 and 2.2 – did you track it down or report it upstream?

Edit: I see mimalloc v3 is out – I missed that! That probably moots this discussion altogether.

skavi 6 hours ago | parent [-]

nope.

codexon 5 hours ago | parent | prev [-]

This is similar to what I experienced when I tested mimalloc many years ago. When it was faster, it wasn't faster by much, and it had pretty bad worst cases.