pron, an hour ago
> Calling anything with opt-in reference counting a GC language

Except I never called it "a GC language" (whatever that means). I said, and I quote, "Rust does have a GC". And it does. Saying that it's "opt-in" when most Rust programs use it (albeit to a lesser extent than Java or Go programs, provided we don't consider Rust's special case of a single reference to be GC) is misleading.

> Rust has a clear purpose. To put a stop to memory safety errors.

Yes, but: 1. other languages do that, too, so clearly "stopping memory errors" alone isn't enough; 2. Rust does it in a way that requires much more use of unsafe escape hatches than other languages, so it clearly recognises the need for some compromise; and 3. Rust's safety very much comes at a cost. So its purpose may be clear, but it is also very clear that it makes tradeoffs and compromises, which implies that other tradeoffs and compromises may be reasonable, too.

But anyway, having a very precise goal makes some things quantifiable, but I don't think anyone believes that's what makes one language better than another. C and JS also have very clear purposes, but does that make them better than, say, Python?

> Having a safe language is a precondition but not enough. I want it to be as performant as C as well... You need a mass of people to move there.

So clearly you have a few prerequisites, not just memory safety, and you recognise the need for some pragmatic compromises. Can you accept that your prerequisites and compromises might not be universal, and that there may be others that are equally reasonable, all things considered?

I am a proponent of software correctness and formal methods (you can check out my old blog: https://pron.github.io), and I've learnt a lot over my decades in industry about the complexities of software correctness.
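For concreteness, the "opt-in" reference counting under discussion is Rust's library-level `Rc` (and its atomic sibling `Arc`): the count is maintained at runtime, bumped on clone, decremented on drop, and the value is freed when it hits zero. A minimal sketch:

```rust
use std::rc::Rc;

fn main() {
    // Rc is opt-in, runtime reference counting: a simple form of
    // garbage collection living in the standard library.
    let a = Rc::new(vec![1, 2, 3]);
    assert_eq!(Rc::strong_count(&a), 1);

    let b = Rc::clone(&a); // bumps the count; no deep copy of the Vec
    assert_eq!(Rc::strong_count(&a), 2);

    drop(b); // count goes back down
    assert_eq!(Rc::strong_count(&a), 1);
    // when `a` is dropped, the count reaches zero and the Vec is freed
}
```

Whether one calls this "GC" is exactly the terminological dispute above; the mechanics themselves are not in question.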
When I choose a low-level language to switch to away from C++, my prerequisites are: a simple language with no implicitness (I want to see every operation on the page), because I think it makes code reviews more effective (the effectiveness of code reviews has been shown empirically, although not its relationship to language design), and fast compilation, to allow me to write more tests and run them more often. I'm not saying that my requirements are universally superior to yours, and I, too, place a high emphasis on correctness (which extends far beyond mere memory safety); it's just that my conclusions, and perhaps personal preferences, lead me to a different path from your preferred one. I don't think anyone has any objective data to support the claim that my preferred path to correctness is superior to yours or vice-versa.

I can say, however, that in the 1970s, proponents of deductive proofs warned of an impending "software crisis" and believed that proofs were the only way to avoid it (as proofs are "quantifiably" exhaustive). Twenty years later, one of them, Tony Hoare, famously admitted he was wrong, and that less easily quantifiable approaches turned out to be more effective than expected (and more effective than deductive proofs, at least for complicated properties). So the idea that an approach is superior just because it's absolute/"precise" is not generally true. Of course, we must be careful not to extrapolate and generalise in either direction, but my point is that software correctness is a very complicated subject, and nobody knows what the "best" path is, or even whether there is one such best path.

So I certainly expect a Rust program to have fewer memory-safety bugs than a Zig program (though probably more than a Java program), but that's not what we care about. We want the program to have the fewest dangerous bugs overall. After all, I don't care if my users' credit-card data is stolen due to a UAF or due to SQL injection.
Do I expect a Rust program to have fewer serious bugs than a Zig program? No, and maybe the opposite (and maybe the same), due to my preferred prerequisites listed above. The problem with saying that we should all prefer the more "absolute" approach, even though it could harm less easily quantifiable aspects, because it is at least absolute in whatever it does guarantee, is that this belief has already been shown not to be generally true.

(As a side note, I'll add that a tracing GC doesn't necessarily have a negative impact on speed, and may even have a positive one. The main tradeoff is RAM footprint. In fact, the cornerstone of tracing algorithms is that they can reduce the cost of memory management to be arbitrarily low given a large enough heap. In practice, of course, different algorithms make much more complicated pragmatic tradeoffs. Basic refcounting collectors primarily optimise for footprint.)
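The side note's "arbitrarily low given a large-enough heap" claim follows from the textbook cost model for tracing collection (a deliberate simplification, not a model of any particular collector): each collection traces the live set, and the program allocates roughly (heap − live) bytes between collections, so the amortized tracing cost per allocated byte is live/(heap − live) and shrinks as the heap grows:

```rust
// Toy amortized-cost model for tracing GC: each cycle costs ~ live bytes
// to trace, and buys (heap - live) bytes of allocation headroom.
fn cost_per_byte(heap_mb: f64, live_mb: f64) -> f64 {
    live_mb / (heap_mb - live_mb)
}

fn main() {
    let live = 100.0; // MB of live data, held constant

    let c2 = cost_per_byte(2.0 * live, live); // heap = 2x live set
    let c4 = cost_per_byte(4.0 * live, live); // heap = 4x live set

    // Growing heap headroom drives the per-byte cost down...
    assert!(c4 < c2);
    // ...from 1.0 at a 2x heap to 1/3 at a 4x heap in this model.
    assert!((c2 - 1.0).abs() < 1e-9);
    assert!((c4 - 1.0 / 3.0).abs() < 1e-9);
}
```

This is exactly the footprint-for-throughput trade named above: the cost never reaches zero, but RAM buys it down.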