pron 7 hours ago

> And "opt-in non-tracing GC that isn't used largely throughout the standard library" is not a reasonable definition.

Given that reference counting and tracing are the two classic GC algorithms, I don't see what specifying "non-tracing" adds, and reference counting with special-casing of the single-reference case is still reference counting. I don't know whether the "reasonable definition" of GC matters at all, but if it does, this counts as one.

I agree that the one-reference case is handled in the language and the shared reference case is handled in the standard library, and I think it can be reasonable to call using just the one-reference case "not a GC", but most Rust programs do use the GC for shared references. It is also true that Rust depends less on GC than Java or Go, but that's not the same as not having one.
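
For concreteness, here's a minimal sketch of what that shared-reference GC looks like in practice, using nothing but the standard library's Rc: cloning bumps a reference count, and the allocation is freed when the count reaches zero.

    use std::rc::Rc;

    fn main() {
        // One heap allocation, shared through reference counting.
        let shared = Rc::new(String::from("shared data"));
        let alias = Rc::clone(&shared); // bumps the count, no deep copy

        println!("strong refs = {}", Rc::strong_count(&shared)); // prints 2

        drop(alias);  // count drops back to 1
        drop(shared); // count reaches 0: the String is freed here
    }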

> When it comes to having more segfaults, we know. Zig "wins" most segfaults per issue Razzie Award.

And Rust wins the Razzie Award for most painful development and for lacking similarly powerful arenas. It's like declaring that you win by paying $100 for something while I paid $50 for something else, without comparing what we each got for the money, or declaring that you win because your car is faster without looking at what each of us paid.

> This is what happens when you ignore one type of memory safety.

When you have less safety for any property, you're guaranteed to have more violations. This is what you buy. Obviously, this doesn't mean that avoiding those extra violations is necessarily worth the cost you pay for that extra safety. When you buy something, looking just at what you pay or just at what you get doesn't make any sense. The question is whether this is the best deal for your case.

Nobody knows if there is a universal best deal here, let alone what it is. What is clear is that nothing here is free, and that nothing here has infinite value.

Ygg2 6 hours ago | parent [-]

> I don't know if the "reasonable definition" of GC matters at all

If you define all non-red colors to be green, it is impossible to talk about color theory.

> And Rust wins the Razzie Award for most painful development and lack of similarly powerful arenas.

That's a non-quantifiable skill issue. Segfaults per issue is a quantifiable thing.

> When you have less safety for any property, you're guaranteed to have more violations.

If that's what you truly believed, outside of some debate point, then you'd be advocating for ATS or Ada.SPARK, not Zig.

pron 6 hours ago | parent [-]

> If you define all non-red colors to be green, it is impossible to talk about color theory.

Except reference counting is one of the two classical GC algorithms (alongside tracing), so I think it's strange to treat it as "not a GC". But it is true that the GC/no-GC distinction is not very meaningful, given how different the tradeoffs made by different GC algorithms are. There is wide variation even within these basic algorithms, as well as combinations of them: a mark-and-sweep collector is quite different from a moving collector, and CPython uses refcounting for some things and tracing for others.
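
To make the difference in tradeoffs concrete, here's a minimal sketch of the classic gap between the two algorithms: a reference cycle that a tracing collector would reclaim but that Rc, being pure reference counting, never will (Rust's answer is to break such cycles manually with rc::Weak).

    use std::cell::RefCell;
    use std::rc::Rc;

    // A node that may point at another node, shared via reference counting.
    struct Node {
        next: Option<Rc<RefCell<Node>>>,
    }

    fn main() {
        let a = Rc::new(RefCell::new(Node { next: None }));
        let b = Rc::new(RefCell::new(Node { next: Some(Rc::clone(&a)) }));
        a.borrow_mut().next = Some(Rc::clone(&b)); // close the cycle: a -> b -> a

        println!("{} {}", Rc::strong_count(&a), Rc::strong_count(&b)); // prints 2 2

        // When a and b go out of scope here, each node is still kept alive by the
        // other's strong reference, so neither is freed: a leak that a tracing
        // collector would have reclaimed.
    }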

> That's a non-quantifiable skill issue. Segfaults per issue is a quantifiable thing.

That it's not as easily quantifiable doesn't make it any less real. If we compared languages only by easily quantifiable measures, there would be few differences between them (and many, if not most, would argue that we're missing the differences that matter to them most). For example, it would be hard to distinguish between Java and Haskell. It's also not necessarily a "skill issue". I think that even skilled Rust users would admit that writing and maintaining a large program in TypeScript or Java takes less effort than doing the same in Rust.

Also, ATS has many more compile-time safety capabilities than either Rust or Zig (in fact, compared to ATS, Rust and Zig are barely distinguishable in what they can guarantee at runtime), so according to your measure, both Rust and Zig lose when we consider other alternatives.

> Then you'd be advocating for ATS or Ada.SPARK, not Zig.

Quite the opposite. I'm pointing out that, at least as far as this discussion goes, every added value comes with an added cost that needs to be considered. If what you truly believed is that more compile-time safety always wins, then it is you who should be advocating for ATS over Rust. I'm saying that we don't know where the cost-benefit sweet spot is or, indeed, whether there's only one such sweet spot or several. I'm certainly not advocating for Zig as a universal choice. I'm advocating for selecting the right tradeoffs for every project, and I'm rejecting the claim that whatever benefits Rust or Zig have compared to the other are free. Both (indeed, all languages) require you to pay in some way for what they offer. In other words, I'm arguing that each can be more or less appropriate than the other, depending on the situation, and against the position that Rust is always superior, a position based on looking only at its advantages and ignoring its disadvantages (which, I think, are quite significant).

Ygg2 35 minutes ago | parent [-]

> Except reference counting is one of the two classical GC algorithms (alongside tracing), so I think it's strange to treat it as "not a GC". But it is true that the GC/no-GC distinction is not very meaningful, given how different the tradeoffs made by different GC algorithms are.

That's not the issue. The issue is calling anything with opt-in reference counting a GC language. You're just fudging definitions to get to the desired talking point. I mean, by that definition, C is a GC language. It can be equipped with one.

> That it's not as easily quantifiable doesn't make it any less real.

It makes it more subjective and easier to bias. Rust has a clear purpose: to put a stop to memory safety errors. What does "it's painful to use" mean? Is it like Lisp compared to Haskell, or C compared to Lisp?

> For example, it would be hard to distinguish between Java and Haskell.

It would be possible to objectively distinguish between Java and Haskell, as long as they aren't feature-by-feature compatible.

If you can make a program that halts on that feature, you can prove you're in a language with that feature.

> If what you truly believed is that more compile-time safety always wins, then it is you who should be advocating for ATS over Rust.

Yeah, because you're fighting a strawman. Having a safe language is a precondition, but not enough on its own. I want it to be as performant as C as well.

Second, even if you have the goal of moving to ATS, developing an ATS-like language isn't going to help by itself. You need a critical mass of people to move there.

pron a few seconds ago | parent [-]

> Calling anything with opt-in reference counting a GC language

I didn't call it "a GC language" (I don't know what that term would even mean). I said, and I quote: "Rust does have a GC". That's it. And it does. Calling it "opt-in" when most Rust programs use the GC is also misleading.

> Rust has a clear purpose. To put a stop to memory safety errors.

Yes, but 1. other languages do it, too, so clearly "stopping memory errors" isn't enough, 2. Rust does it in a way that requires much more use of unsafe escape hatches than other languages, so it clearly recognises the need for some compromise, and 3. Rust's safety very much comes at a cost.
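
To illustrate what I mean by escape hatches, here's a simplified sketch modeled on the standard library's own split_at_mut (the function name here is hypothetical): handing out two non-overlapping &mut views into one buffer is perfectly sound, but the borrow checker can't see that, so the proof obligation moves into an unsafe block and onto the programmer.

    // Simplified sketch in the spirit of std's split_at_mut.
    fn split_in_two(data: &mut [u8], mid: usize) -> (&mut [u8], &mut [u8]) {
        assert!(mid <= data.len()); // uphold the invariant the unsafe code relies on
        let len = data.len();
        let ptr = data.as_mut_ptr();
        unsafe {
            // Sound because the two halves never overlap, but the compiler
            // cannot verify that; we are asserting it ourselves.
            (
                std::slice::from_raw_parts_mut(ptr, mid),
                std::slice::from_raw_parts_mut(ptr.add(mid), len - mid),
            )
        }
    }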

So its purpose may be clear, but it is also very clear that it makes tradeoffs and compromises, which implies that other tradeoffs and compromises may be reasonable, too.

But anyway, having a very precise goal makes some things quantifiable, but I don't think anyone thinks that's what makes a language better than another. C and JS also have very clear purposes, but does that make them better than, say, Python?

> Having a safe language is a precondition but not enough. I want it to be as performant as C as well... You need a mass of people to move there.

So clearly you have a few prerequisites, not just memory safety, and you recognise the need for some pragmatic compromises. Can you accept that your prerequisites and compromises might not be universal and that there may be others that are equally reasonable, all things considered?

I am a proponent of software correctness and formal methods (you can check out my old blog: https://pron.github.io) and I've learnt a lot over my decades in industry about the complexities of software correctness. When choosing a low-level language to switch to from C++, my prerequisites are: a simple language with no implicitness (I want to see every operation on the page), because I think that makes code reviews more effective (the effectiveness of code reviews has been shown empirically, although its relationship to language design has not), and fast compilation, which lets me write more tests and run them more often.

I'm not saying that my requirements are universally superior to yours. My interests also lie in a strong emphasis on correctness (which extends far beyond mere memory safety); it's just that my conclusions, and perhaps my personal preferences, lead me to prefer a different path from yours. I don't think anyone has objective data to support the claim that my preferred path to correctness is superior to yours, or vice versa.

I can say, however, that in the 1970s, proponents of deductive proofs warned of an impending "software crisis" and believed that proofs were the only way to avoid it (as proofs are "quantifiably" exhaustive). Twenty years later, one of them, Tony Hoare, famously admitted he was wrong, and that less easily quantifiable approaches turned out to be more effective than expected (and more effective than deductive proofs, at least for complicated properties). Again, we must be careful not to extrapolate in either direction, but my point is that software correctness is a very complicated subject, and nobody knows what the "best" path is, or even whether there is one such best path.