embedding-shape 16 hours ago

It's a tradeoff though, as I think traits make Rust build times grow really quickly. I don't know the exact characteristics of it, and I think they've sped it up compared to how it used to be, but I do remember that you get noticeable build slowdowns the more you use traits, especially "complicated" ones.

treyd 16 hours ago | parent | next [-]

Code is typically run many more times than it's compiled, so this is a perfectly good tradeoff to make.

embedding-shape 16 hours ago | parent | next [-]

Absolutely, was not trying to claim otherwise. But since we're engineers (at least I like to see myself as one), it's worth always keeping in mind that almost everything comes with tradeoffs, even traits :)

Someone down the line might be wondering why their Rust builds suddenly take 4x as long after merging something, and just maybe remembering this offhand comment will help them find the issue faster :)

cardanome 16 hours ago | parent | prev [-]

For release builds yes. For debug builds slow compile times kill productivity.

greener_grass 16 hours ago | parent | next [-]

If you are not willing to make this trade then how much of a priority was run-time performance, really?

esrauch 16 hours ago | parent | next [-]

It's never the case that only one thing is important.

In the extreme, you surely wouldn't accept a 1-day or even 1-week build time. A 1-week build isn't even purely hypothetical: a system could fuzz over candidate compilations, run load tests, and do PGO to deliver something better. But even if runtime performance were important enough to justify such a system, you obviously wouldn't accept developer cycles that take a week to compile.

Build time also matters for releases: if you have a critical bug in production and need to ship a fix, a 1-hour build can still cost you a lot. Release build time doesn't matter until it does.

torginus 15 hours ago | parent | prev | next [-]

A lot of C++ devs advocate for simple replacements for the STL that don't rely too heavily on zero-cost abstractions. That way you can have small binaries and fast compiles, and set up a fast-debug kind of build where you only turn on a few optimizations.

That gets you most of the speed of the release version, with a fairly good chance of getting usable debug info.

A huge issue with C++ debug builds is that the resulting executables are unusably slow, because the zero-cost abstractions are not zero cost in debug builds.
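
As a rough illustration (a Rust analogue I'm sketching here, not tied to any particular STL replacement): in a release build the iterator version below compiles down to roughly the same machine code as the hand-written loop, but at opt-level 0 every adapter and closure stays a real, un-inlined call, so the "zero-cost" style runs much slower.

    // Release builds: both functions optimize to roughly the same tight loop.
    // Debug builds (opt-level = 0): the iterator/closure layers are not
    // inlined, so the abstraction has a very real runtime cost.
    fn sum_of_squares_iter(values: &[i64]) -> i64 {
        values.iter().map(|v| v * v).sum()
    }

    fn sum_of_squares_loop(values: &[i64]) -> i64 {
        let mut total = 0;
        for v in values {
            total += v * v;
        }
        total
    }

    fn main() {
        let data: Vec<i64> = (0..1_000_000).collect();
        assert_eq!(sum_of_squares_iter(&data), sum_of_squares_loop(&data));
    }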

pjmlp 14 hours ago | parent [-]

Unless one uses VC++, which can debug release builds.

Similar capabilities could be made available in other compilers.

torginus 10 hours ago | parent | next [-]

It's not just the compiler: MSVC, like all the others, has a tendency to mangle code in release builds to such an extent that the debug info is next to useless (which, to be fair, is what I asked it to do, so I don't fault it).

Now, to hate a bit on MSVC: its Edit & Continue functionality makes debug builds unbearably slow, but at least it doesn't work, so the first thing I do is turn it off.

pjmlp 10 hours ago | parent [-]

Which is why recent versions have dynamic debugging mode.

cyberax an hour ago | parent | prev [-]

You can debug release builds with gcc/clang just fine. They don't generate debug information by default, but you can always request it ("-O3 -g" is a perfectly fine combination of flags).
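
For example (just a typical invocation, same idea with gcc):

    clang++ -O3 -g main.cpp -o main

gives you a fully optimized binary that still carries debug info for the debugger.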

arw0n 14 hours ago | parent | prev | next [-]

I think this also massively depends on your domain, familiarity with the code base and style of programming.

I've changed my approach to debugging significantly over time (probably in part due to Rust's slower compile times), and usually get away with 2-3 compiles to fix a bug, but spend more time reasoning about the code.

kace91 16 hours ago | parent | prev | next [-]

Doesn’t Rust have incremental builds to speed up debug compilation? How slow are we talking here?

steveklabnik 16 hours ago | parent | next [-]

Rust does have incremental rebuilds, yes.

Folks have worked tirelessly to improve the speed of the Rust compiler, and it's gotten significantly faster over time. However, there are also language-level reasons why it can take longer to compile than other languages, though the initial guess of "because of the safety checks" is not one of them; those checks are quite fast.

> How slow are we talking here?

It really depends on a large number of factors. I think saying "roughly like C++" isn't totally unfair, though again, it really depends.

sfink 12 hours ago | parent [-]

My initial guess would be "because of the zero-cost abstractions", since I read "zero-cost" as "zero runtime cost" which implies shifting cost from runtime to compile time—as would happen with eg generics or any sort of global properties.

(Uh oh, there's an em-dash, I must be an AI. I don't think I am, but that's what an AI would think.)

steveklabnik 12 hours ago | parent [-]

I used em dashes before AI, and won't stop now :)

That's sort of part of it, but there are also specific language design choices that, had they been decided differently, might make things faster.

esrauch 16 hours ago | parent | prev [-]

People do have cold Rust compiles that can push up into the hours. Large crates often make design choices to keep themselves in a more compile-time-friendly shape.

Note that C++ has almost as large a problem with compile times, with large build fanouts including on templates, and it's not always realistic for incremental builds to solve it either, especially the time burnt on linking. E.g. I believe Chromium development often uses a mode with dynamic linking against .dlls instead of the fully static linking they ship, exactly to speed up incremental development. The "fast" case is C, not C++.

embedding-shape 15 hours ago | parent | next [-]

> I believe Chromium development often uses a mode with dynamic linking against .dlls instead of the fully static linking they ship, exactly to speed up incremental development. The "fast" case is C, not C++.

Bevy, a Rust ECS framework for building games (among other things), has a similar solution: a Cargo feature (called "dynamic_linking") that enables dynamic linking during development. https://bevy.org/learn/quick-start/getting-started/setup/#dy...
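
If I remember the setup guide correctly, during development you can turn it on ad hoc with something like:

    cargo run --features bevy/dynamic_linking

which should make the engine link as a dynamic library, so incremental rebuilds don't have to statically relink all of Bevy every time.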

kibwen 5 hours ago | parent | prev [-]

There's no Rust codebase that takes hours to compile cold unless 1) you're compiling a massive codebase in release mode with LTO enabled, in which case, you've asked for it, 2) you've ported Doom to the type system, or 3) you're compiling on a netbook.
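
For reference, the "you've asked for it" setup is roughly this kind of release profile in Cargo.toml (a generic sketch, not any specific project's settings):

    [profile.release]
    lto = "fat"          # whole-program LTO: better codegen, much slower builds
    codegen-units = 1    # more optimization opportunities, less build parallelism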

dwattttt 5 hours ago | parent [-]

I'm curious if this is tracked or observed somewhere; crater runs are a huge source of information, and metrics about the compilation time of crates would be quite interesting.

estebank 4 hours ago | parent [-]

I know some large orgs have this data for internal projects.

This page gives a very loose idea of how we're doing over time: https://perf.rust-lang.org/dashboard.html

cogman10 15 hours ago | parent | prev [-]

AFAIK, it's not the traits that do it but rather the generics.

Rust does make it a lot easier to use generics, which is likely why using more traits appears to be the cause of longer build times. I think it's more that the more traits you have, the more likely you are to stumble over some generic code, which ultimately generates more code.
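
A rough sketch of that mechanism (illustrative only): the generic function below is monomorphized, i.e. the compiler emits and optimizes a separate copy for every concrete type it's called with, while the trait-object version is compiled exactly once and dispatched through a vtable at runtime.

    use std::fmt::Display;

    // Monomorphized: one copy of this function is generated (and optimized)
    // per concrete T it is instantiated with, e.g. i32 and String below.
    fn print_all<T: Display>(items: &[T]) {
        for item in items {
            println!("{item}");
        }
    }

    // Trait object: compiled once; calls go through dynamic dispatch instead
    // of being expanded per type at compile time.
    fn print_all_dyn(items: &[&dyn Display]) {
        for item in items {
            println!("{item}");
        }
    }

    fn main() {
        print_all(&[1, 2, 3]);           // instantiates print_all::<i32>
        print_all(&["a".to_string()]);   // instantiates print_all::<String>
        print_all_dyn(&[&1, &"hello"]);  // one compiled body, vtable calls
    }

More traits tend to mean more generic bounds and more instantiations, and each instantiation is extra code for the backend to optimize.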

embedding-shape 15 hours ago | parent [-]

> AFAIK, it's not the traits that do it but rather the generics.

Aah, yes, that sounds more correct. The end result is the same, but I failed to remember the correct mechanism that led to it. Thank you for the correction!