arghwhat 15 hours ago

To be honest, I think a lot of the justification here is just a difference in standard library and ease of use.

I wouldn't consider there to be any notable effort in making threads work across target platforms in C relative to normal effort levels in C, but it's objectively more work than `std::thread::spawn(move || { ... });`.
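For comparison, a minimal sketch of what that Rust one-liner buys you (the example workload is mine; the C equivalent would need pthreads or Win32 plumbing plus manual joins and error handling):

```rust
use std::thread;

fn main() {
    let data = vec![1, 2, 3];
    // Ownership of `data` moves into the spawned thread, so the compiler
    // guarantees nothing else can touch it concurrently.
    let handle = thread::spawn(move || data.iter().sum::<i32>());
    let sum = handle.join().unwrap();
    assert_eq!(sum, 6);
}
```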

Despite its benefits, I don't think memory safety really plays a role in the usage rate of parallelism. Case in point: Go has no memory safety for concurrent code, with both data races and atomicity issues being easy to introduce, and yet it relies much more heavily on concurrency (with the degree of parallelism managed by the runtime) and with much less consideration than Rust. After all, `go f()` is even easier.
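To illustrate the contrast being drawn here, a minimal Rust sketch (the counter workload is hypothetical): unsynchronized mutable sharing across threads is rejected at compile time, so the moral equivalent of a Go data race has to be written with explicit synchronization:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Sharing a plain `&mut i32` across threads would not compile;
    // Arc<Mutex<_>> is the minimal way to express shared mutation.
    let counter = Arc::new(Mutex::new(0));
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let c = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..1000 {
                    *c.lock().unwrap() += 1;
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(*counter.lock().unwrap(), 4000);
}
```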

(As a personal anecdote, I've probably run into more concurrency-related heisenbugs in Go than I ever did in C, with C heisenbugs more commonly being memory mismanagement in single-threaded code with complex object lifetimes/ownership structures...)

josephg 4 hours ago | parent | next [-]

> To be honest, I think a lot of the justification here is just a difference in standard library and ease of use.

I really liked this article by Bryan Cantrill from 2018:

https://bcantrill.dtrace.org/2018/09/28/the-relative-perform...

He straight-ported some C code to Rust and found the Rust code outperformed it by ~30% or so. The culprit turned out to be that in C he was using a hash table library he'd been copy-pasting between projects for years. In Rust, he used BTreeMap from the standard library, which turns out to be much better optimized.

This isn't evidence that Rust is faster than C. I mean, you could just backport that BTreeMap to C and get exactly the same performance in C code. At the limit, I think both languages perform basically the same.

But most people aren't going to do that.

If we're comparing normal Rust to normal C - whatever that means - then I think Rust takes the win here. Even Bryan Cantrill - one of the best C programmers you're ever likely to run into - wasn't using a particularly well-optimized hash table implementation in his C code. The quality of the standard tools matters.

When we talk about C, we're really talking about an ecosystem of practice. And in that ecosystem, having a better standard library will make the average program better.
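A minimal sketch of the standard container in question (the word-count workload is my own illustration, not from Cantrill's post): BTreeMap ships with std and is already heavily optimized, so "the average program" gets it for free.

```rust
use std::collections::BTreeMap;

fn main() {
    // BTreeMap uses wide, cache-friendly B-tree nodes rather than the
    // pointer-chasing of a typical hand-rolled C tree or hash table.
    let mut counts: BTreeMap<&str, u32> = BTreeMap::new();
    for word in ["the", "quick", "the", "fox"] {
        *counts.entry(word).or_insert(0) += 1;
    }
    assert_eq!(counts["the"], 2);
    // Unlike a hash table, iteration yields keys in sorted order.
    let keys: Vec<_> = counts.keys().copied().collect();
    assert_eq!(keys, ["fox", "quick", "the"]);
}
```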

mandw 2 hours ago | parent [-]

The only real question I have with this is: did the program have to hit any specific performance target? I could write a small utility in Python that would be completely acceptable for its use but at the same time be 15x slower than an implementation in another language. So how do you compare code across languages that wasn't written for performance, given that one may have some set of functions that happens to favour one language in that particular app? I think to compare you have to at least have the goal of performance for both when testing. If he had needed his app to be 30% faster he would have made it so, but it didn't need to be, so he didn't. Which doesn't make it great for comparison.

Edit: I also see that your reply was specifically about the point that the libraries by themselves can help performance with no extra work, and I do agree with you on that, as you did with the commenter above.
josephg an hour ago | parent [-]

Honestly I'm not quite sure what point you're making.

> If he needed his app to be 30% faster he would have made it so

Would he have? Improving performance by 30% usually isn't so easy. Especially not in a codebase which (according to Cantrill) was pretty well optimized already.

The performance boost came as a surprise to him. As I remember the story, he had already made the C code pretty fast and didn't realise his C hash table implementation could be improved that much. The fact that Rust gave him a better map implementation out of the box is great, because it means he didn't need to be clever enough to figure out those optimizations himself.

It's not an apples-to-apples comparison. But I don't think comparing the world's fastest C code to the world's fastest Rust code is a good comparison either, since most programmers don't write code like that. It's usually the incidental, low-effort performance differences that make a programming language "fast" in the real world. Like a good B-tree implementation just shipping with the language.

oconnor663 2 hours ago | parent | prev | next [-]

> Despite its benefits, I don't think memory safety really plays a role in the usage rate of parallelism.

I can see what you mean with explicit things like thread::spawn, but I think Tokio is a major exception. Multithreaded by default seems like it would be an insane choice without all the safety machinery. But we have the machinery, so instead most of the async ecosystem is automatically multithreaded, and it's mostly fine. (The biggest problems seem to be the Send bounds, i.e. the machinery again.) Cargo test being multithreaded by default is another big one.
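A minimal std-only sketch of the `Send` machinery being alluded to (assuming nothing beyond std; Tokio's work-stealing executor imposes the same `Send` bound on spawned futures): `Rc` is not `Send`, so handing it to another thread is a compile error, while `Arc` is accepted.

```rust
use std::rc::Rc;
use std::sync::Arc;
use std::thread;

fn main() {
    let shared = Arc::new(vec![1, 2, 3]);
    // Arc<T> is Send, so this compiles. Replacing Arc with Rc here
    // would be rejected at compile time, because Rc is not Send.
    let handle = {
        let shared = Arc::clone(&shared);
        thread::spawn(move || shared.len())
    };
    assert_eq!(handle.join().unwrap(), 3);

    // Rc is fine as long as it stays on a single thread.
    let local = Rc::new(5);
    assert_eq!(*local, 5);
}
```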

majormajor 6 hours ago | parent | prev | next [-]

> (As a personal anecdote, I've probably run into more concurrency-related heisenbugs in Go than I ever did in C, with C heisenbugs more commonly being memory mismanagement in single-threaded code with complex object lifetimes/ownership structures...)

Is that beyond just "concurrency is tricky and a language that makes it easier to add concurrency will make it easier to add sneaky bugs"? I've definitely run into that, but have never written concurrent C to compare the ease of heisenbug-writing.

nineteen999 9 hours ago | parent | prev | next [-]

> (As a personal anecdote, I've probably run into more concurrency-related heisenbugs in Go than I ever did in C, with C heisenbugs more commonly being memory mismanagement in single-threaded code with complex object lifetimes/ownership structures...)

This is my experience too.


pornel 7 hours ago | parent | prev [-]

Go is weirdly careless about the thread-safety of its built-in data structures, but GC, channels, and the race detector seem to be enough?