| This is a great resource! Some TILs:

Hashing

> The default hashing algorithm is not specified, but at the time of writing the default is an algorithm called SipHash 1-3. This algorithm is high quality—it provides high protection against collisions—but is relatively slow, particularly for short keys such as integers.

> An attempt to switch from fxhash back to the default hasher resulted in slowdowns ranging from 4-84%!

I/O

> Rust’s print! and println! macros lock stdout on every call. If you have repeated calls to these macros it may be better to lock stdout manually.

Build times

> If you use dev builds but don’t often use a debugger, consider disabling debuginfo. This can improve dev build times significantly, by as much as 20-40%.

Interesting std library alternatives

> If you have many short vectors, you can use the SmallVec type from the smallvec crate. SmallVec<[T; N]> is a drop-in replacement for Vec that can store N elements within the SmallVec itself, and then switches to a heap allocation if the number of elements exceeds that.

> If you have many short vectors and you precisely know their maximum length, ArrayVec from the arrayvec crate is a better choice than SmallVec. It does not require the fallback to heap allocation, which makes it a little faster.

> The SmallString type from the smallstr crate is similar to the SmallVec type.

I doubt I'll change my use of the standard types often, but this is good information to know for cases where this might be applicable.

Advice on enums

> If an enum has an outsized variant, consider boxing one or more fields.

I'm surprised I didn't see any advice about skipping proc macros or Serde for faster compile times. |
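The enum-boxing advice quoted above is easy to see with std::mem::size_of; a toy sketch (the variant types are made up for illustration):

```rust
use std::mem::size_of;

// An outsized variant inflates every value of the enum.
#[allow(dead_code)]
enum Unboxed {
    Small(u8),
    Large([u8; 1024]), // this variant sets the size for the whole enum
}

// Boxing the large field keeps the enum at a pointer plus a tag.
#[allow(dead_code)]
enum Boxed {
    Small(u8),
    Large(Box<[u8; 1024]>),
}

fn main() {
    // The unboxed enum must hold the largest variant inline.
    assert!(size_of::<Unboxed>() >= 1024);
    // The boxed version is two machine words at most on 64-bit.
    assert!(size_of::<Boxed>() <= 16);
    println!(
        "Unboxed: {} bytes, Boxed: {} bytes",
        size_of::<Unboxed>(),
        size_of::<Boxed>()
    );
}
```

The trade-off is an extra heap allocation and pointer chase when the large variant is actually used, which is why the book says "consider" rather than "always".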
| |
▲ | epage 3 hours ago | parent | next [-] | |

> Build times
> I'm surprised I didn't see any advice about skipping proc macros or Serde for faster compile times.

A more comprehensive document on build times is at https://corrode.dev/blog/tips-for-faster-rust-compile-times/

We're integrating parts of it into Cargo's official documentation at https://doc.rust-lang.org/nightly/cargo/guide/build-performa...

We're coordinating the work on that at https://github.com/rust-lang/cargo/issues/16119

> > If you use dev builds but don’t often use a debugger, consider disabling debuginfo. This can improve dev build times significantly, by as much as 20-40%.

We're wondering if we should split iterative development from debugging to pull in these improvements (and maybe more). This is being explored at https://github.com/rust-lang/cargo/issues/15931 | | |
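For reference, the debuginfo advice being discussed maps to a one-line Cargo profile setting; a minimal sketch (`"line-tables-only"` needs Rust 1.71+; `debug = false` drops debuginfo entirely):

```toml
# Cargo.toml: shrink debuginfo in dev builds for faster compiles.
[profile.dev]
debug = "line-tables-only"   # or `debug = false` to remove it entirely
```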
| ▲ | echelon 2 hours ago | parent [-] | | Any chance we could get macros that do physical-code-on-machine codegen? (Like Google protobuf?) I love Serde to death, but the compile times are getting absolutely absurd. It's painful to sit around for three minutes after a single code change. If I could use Serde with all of its features, but have it write out the source to disk so I can commit it to our repo, that seems like it would vastly improve compile times. During development, I'd love to be able to change our API definition and be able to rapidly test changes. The productivity loss with the current state of Serde and proc macros sucks. Any chance of something like that happening? I'd use it in a heartbeat. I would donate to fund this. |
| |
▲ | saghm 11 hours ago | parent | prev | next [-] | |

Most of these compile time improvements seem to be more along the lines of drop-in changes that don't require larger refactors. Removing something like serde from a codebase that makes use of it generally is going to be a lot more work. If you're referring to serde being brought in by a dependency when you don't need it, most well-behaved crates should already have this be something you opt into by specifying the feature rather than something you need to go out of your way to enable.

That said, I've had a theory for a while now that when Rust projects end up suffering from long compile times, the most significant cause is unneeded code from dependencies getting compiled, and that the poor ergonomics around Cargo features have basically encouraged the opposite of the good behavior I described above. I've still almost never seen this discussed outside of when I bring it up, so I wrote up my thoughts on it in a blog post a while back rather than try to restate my case every time, but I don't have much hope that anyone will take it seriously enough to either convince me I'm wrong or do anything about it: https://saghm.com/cargo-features-rust-compile-times/ | |
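The opt-in pattern described here looks roughly like this in a library's manifest (the feature layout is illustrative, not any particular crate's):

```toml
# A well-behaved library gates heavy dependencies behind features.
[dependencies]
serde = { version = "1", optional = true, default-features = false }

[features]
default = []            # keep the default build lean
serde = ["dep:serde"]   # downstream users opt in with `features = ["serde"]`
```

Downstream crates that don't enable the feature never compile serde through this dependency edge at all.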
▲ | throwup238 4 hours ago | parent | next [-] | |

> That said, I've had a theory for a while now that when Rust projects end up suffering from long compile times, the most significant cause is unneeded code from dependencies getting compiled, and that the poor ergonomics around Cargo features have basically encouraged the opposite of the good behavior I described above.

Are you talking about clean builds or incremental builds? Rust developers care far more about the latter, and dependencies should only be monomorphized, not rebuilt. | |
| ▲ | anonymousDan 8 hours ago | parent | prev [-] | | Interesting. It feels like once you have the features defined this is basically dead code elimination. To solve the transitive dependency issue could you not execute a dead code elimination pass once and cache the results? |
| |
▲ | tialaramex 4 hours ago | parent | prev | next [-] | |

On a 64-bit machine the String type, and likewise a C++ std::string, is 24 bytes: 8 bytes for a pointer to the allocated memory on the heap, then twice that for a size and capacity, or their pointer equivalents depending on implementation.

The 3rd party library type CompactString can fit up to 24 bytes of UTF-8 text internally, yet it is still the same size as String and, just like String, Option<CompactString> is the same size as CompactString. It does add complexity (and of course a library dependency, if you care about that) but if you have lots of short strings this may be the best small string type for you.

[The key is that UTF-8 encoding can only end with certain bytes; CompactString's documentation explains in more detail] | | | |
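The std-only part of this claim can be checked directly; a minimal sketch for a 64-bit target (CompactString's inline-storage trick itself requires the third-party crate):

```rust
use std::mem::size_of;

fn main() {
    // String is pointer + length + capacity: three machine words.
    assert_eq!(size_of::<String>(), 24);

    // The heap pointer is never null, so Option<String> can use
    // that niche to represent None and stay the same size.
    assert_eq!(size_of::<Option<String>>(), 24);

    println!("String and Option<String> are both 24 bytes");
}
```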
| ▲ | xnorswap 8 hours ago | parent | prev | next [-] | | > slow, particularly for short keys such as integers An interesting thing about the CLR is that the default hash for integers is: h(x) = x.
Which, as well as being collision-free, also avoids the trap of a slow default hash. | | |
▲ | tialaramex 3 hours ago | parent | next [-] | |

The identity function. Several C++ implementations choose this too. It is very cheap but has obvious problems which may make you wish you'd paid more up front. If you want this in Rust you can use: https://crates.io/crates/integer-hasher -- being able to swap out the hasher is something C++ folks have kinda wanted to do for like 15-20 years, but they have never got it over the line.

I have some benchmarks I've been noodling with for a while, measuring different ways to do a hashtable. I call the one where we just do this operation but otherwise use the ordinary Rust Swiss tables IntHashMap in this code.

For some operations, and especially at small sizes, IntHashMap is significantly better. But for other operations, and especially at large sizes, it's worse. For example, suppose we have 10 K->V pairs in our hash table: when we're looking for one of the ten K values, we're much faster in IntHashMap. However, if it's not there, IntHashMap is slightly slower. Further, if we have the first ten thousand numbers instead of ten thousand random numbers - like if we'd made a hash table of serial numbers for something - we're ten times worse in IntHashMap, and that's because our hashing function, though fast, is very bad at its job. | |
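A no-op integer hasher like the one described can be sketched with std traits alone (IdentityHasher and IntHashMap here are illustrative names, not the integer-hasher crate's actual API):

```rust
use std::collections::HashMap;
use std::hash::{BuildHasherDefault, Hasher};

// A hasher that returns the integer key unchanged as the hash.
#[derive(Default)]
struct IdentityHasher(u64);

impl Hasher for IdentityHasher {
    fn write(&mut self, _bytes: &[u8]) {
        // u64 keys hash via write_u64, so this path is never taken here.
        unimplemented!("only integer keys are supported");
    }
    fn write_u64(&mut self, n: u64) {
        self.0 = n;
    }
    fn finish(&self) -> u64 {
        self.0
    }
}

// Ordinary std HashMap (Swiss tables), just with the hasher swapped out.
type IntHashMap<V> = HashMap<u64, V, BuildHasherDefault<IdentityHasher>>;

fn main() {
    let mut map: IntHashMap<&str> = IntHashMap::default();
    map.insert(42, "answer");
    assert_eq!(map.get(&42), Some(&"answer"));
    assert_eq!(map.get(&7), None);
}
```

This skips SipHash entirely on the hot path, with exactly the distribution hazards discussed in the rest of the thread.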
| ▲ | mrec 6 hours ago | parent | prev | next [-] | | Do you know how it then maps hashes to buckets? I'd expect the natural occurrence of integer values to be very non-uniform and heavily biased toward the small end. | |
▲ | Tuna-Fish 6 hours ago | parent | prev [-] | |

But if you know the hash table implementation, it makes forcing collisions trivial for user-generated input, leading to easy denial of service attacks. The first requirement for safe hashtable implementations is a secret key, which makes it impossible for an external observer to know the hash value. (Or even to know the relative hash values of any two inputs.) | | |
| ▲ | josefx 4 hours ago | parent | next [-] | | > The first requirement for safe hashtable implementations is a secret key, Some languages use different approaches. The buckets in a Java HashMap turn into a sorted tree if they grow too large. Then there are trivial solutions like adding an input limit for untrusted users. Either way works, is secure and doesn't depend on a secret key. | |
▲ | xnorswap 6 hours ago | parent | prev [-] | |

Did you reply to the wrong comment? I'm not sure it follows from what I posted. You can't force a collision for the function h(x) = x; by definition it has no collisions. It's not just collision-resistant, it's actually collision proof!

hash(x) = x is of course not a cryptographic hash, but it also has the advantage that it's unlikely to be confused for one if anyone looks at the output. | | |
| ▲ | kstrauser 6 hours ago | parent | next [-] | | Nitpick: it’s still going to collide when taking the modulo to decide what bucket to put it in. If you know that the hashmap has 1000 buckets, and you spam it with multiples of 1000, you could make it put every key into the same bucket and have O(1) lookups turn into O(n). That doesn’t happen with real hash functions with random seeds. | | | |
| ▲ | afishhh 6 hours ago | parent | prev [-] | | It's not about collisions of the hash function itself. Every hashtable implementation will put the hash value through some sort of modulo because you generally don't want to waste memory storing the key 5726591 at index 5726591 in an array. So if you know how the implementation works and the hash function is predictable you can keep providing the program with values that will consistently go into the same bucket resulting in linear lookups and insertions. |
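The attack described in the two comments above comes down to bucket arithmetic; a toy sketch with an assumed 1024-bucket table and an identity hash (real tables differ in detail, but the modulo step is the point):

```rust
fn main() {
    // Suppose a table with 1024 buckets and an identity hash, so
    // bucket = hash(key) % capacity = key % 1024.
    let capacity = 1024u64;
    let bucket = |key: u64| key % capacity;

    // An attacker who knows this just submits multiples of 1024.
    let hostile: Vec<u64> = (0..100).map(|i| i * capacity).collect();

    // Every hostile key lands in bucket 0, so lookups in that
    // bucket degrade from O(1) toward O(n).
    assert!(hostile.iter().all(|&k| bucket(k) == 0));
    println!("all {} hostile keys map to bucket 0", hostile.len());
}
```

With a keyed hash like SipHash, the attacker can't predict bucket indices, so this input pattern is no worse than random keys.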
|
|
| |
▲ | zipy124 6 hours ago | parent | prev | next [-] | | Locking stdout on every call is common amongst a lot of programming languages; it guards against interleaved output, a common issue in multi-threaded code where every thread is allowed to print to the terminal. | |
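The manual-locking advice quoted from the book looks roughly like this in Rust; a minimal sketch where the generic writer makes the batching testable:

```rust
use std::io::Write;

// Write many lines through one handle instead of taking the
// stdout lock once per println! call.
fn write_report(mut out: impl Write, n: usize) -> std::io::Result<()> {
    for i in 0..n {
        writeln!(out, "line {i}")?;
    }
    Ok(())
}

fn main() {
    let stdout = std::io::stdout();
    // Lock once; every write inside shares the single lock.
    write_report(stdout.lock(), 3).unwrap();
}
```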
| ▲ | andrepd 6 hours ago | parent | prev [-] | | RE ArrayVec: I would recommend the heapless crate instead, better code and nicer API. |
|