| ▲ | KingOfCoders 3 days ago |
| "All it took was some basic understanding of memory management and a bit of discipline." The words of every C programmer who created a CVE. |
|
| ▲ | pron 3 days ago | parent | next [-] |
What about every Java/JS/Python/Rust/Go programmer who ever created a CVE? Out-of-bounds access is, indeed, a very common cause of dangerous vulnerabilities, but Zig eliminates it to the same extent as Rust. UAF is much lower on the list, to the point that non-memory-safety-related causes easily dominate it.[1] The question is, then, what price in language complexity are you willing to pay to completely avoid the 8th most dangerous cause of vulnerabilities, as opposed to reducing it without eliminating it? Zig makes UAF easier to find than C does, and not only that, but the danger of UAF exploitability can be reduced even further in the general case rather easily (https://www.cl.cam.ac.uk/~tmj32/papers/docs/ainsworth20-sp.p...). So it is certainly true that memory unsafety is a cause of dangerous vulnerabilities, but it is spatial unsafety that's the dominant factor here, and Zig eliminates that. So if you believe (rightly, IMO) that a language should make sure to reduce common causes of dangerous vulnerabilities (as long as the price is right), then Zig does exactly that! I don't think it's unreasonable to find the cost of Rust justified to eliminate the 8th most dangerous cause of vulnerabilities, but I think it's also not unreasonable to prefer not to pay it. [1]: https://cwe.mitre.org/top25/archive/2024/2024_cwe_top25.html |
| |
| ▲ | johncolanduoni 3 days ago | parent | next [-] | | I don't think the rank on a list that includes stuff like SQL injection and path traversal tells you much about what language features are worthwhile in the C/C++ replacement space. No developer that works on something like Linux or Chromium would introduce a SQL injection vulnerability unless they experienced severe head trauma. They do still introduce use after free vulnerabilities with some regularity. | | |
| ▲ | pron 3 days ago | parent [-] | | First, UAF can be made largely non-dangerous without eliminating it (as in the link above and others). It's harder to exploit to begin with, and can be made much harder still virtually for free. So the number of UAFs and the number of exploitable vulnerabilities due to UAF are not the same, and have to be treated as separate things (because they can be handled separately). Second, I don't care if my bank card details leak because of CSRF or because of a bug in Chromium. Now, to be fair, the list of dangerous vulnerabilities weighs things by number of incidents and not by number of users affected, and it is certainly true that more people use Chrome than use any particular website vulnerable to CSRF. But things aren't so simple there, either. For example, I work on the JVM, which is largely written in C++, and I can guarantee that many more people are affected by non-memory-safety vulnerabilities in Java programs than by memory-safety vulnerabilities in the JVM. Anyway, the point is that the overall danger and incidence of vulnerabilities - and therefore the justified cost of addressing all the different factors involved - are much more complicated than "memory unsafety bad". Yes, it's bad, but different kinds of memory unsafety are bad to different degrees, and the harm can be controlled separately from the cause. Now, I think it's obvious that even Rust fans understand there's a complex cost/benefit game here, because most software today is already written in memory-safe languages, and the very reason someone would want to use a language like Rust in the first place is that they recognise that sometimes the cost of other memory-safe languages isn't worth it, despite the importance of memory safety. If both spatial and temporal safety were always justified at any reasonable cost (a cost happily paid by most software already), then there would be no reason for Rust to exist. Once you recognise that, you have to also recognise that what Rust offers must be subject to the same cost/benefit analysis that is used to justify it in the first place. And it shouldn't be surprising that the outcome would be similar: sometimes the cost may be justified, sometimes it may not be. | | |
| ▲ | johncolanduoni 3 days ago | parent [-] | | > I don't care if my bank card details leak because of CSRF or because of a bug in Chromium. Sure, but just by virtue of what these languages are used for, almost all CSRF vulnerabilities are not in code written in C, C++, Rust, or Zig. So if I'm targeting that space, why would I care that some Django app or whatever has a CSRF when analyzing what vulnerabilities are important to prevent for my potential Zig project? You're right that overall danger and incidence of vulnerabilities matter - but they matter for the actual use-case you want to use the language for. The Linux kernel for example has exploitable TOCTOU vulnerabilities at a much higher rate than most software - why would they care that TOCTOU vulnerabilities are rare in software overall when deciding what complexity to accept to reduce them? | |
| ▲ | pron 3 days ago | parent [-] | | To your first point, you're sort-of right, but it's still not so simple. The value of vulnerabilities in V8 or in CPython is affected by their likelihood compared to other vulnerabilities affecting users of that same product (i.e. in JS or Python code). If I want to know how much I should pay for a screw in some machine, the answer isn't "whatever it costs to minimise the chance of the screw failing regardless of other components". Once the chance of a fault in the screw is significantly lower than the chance of the machine failing for other reasons, there's not much point in getting a more resilient screw, as it would have little to no effect on the resilience of the machine. The rate of vulnerabilities obviously can't be zero, but it also doesn't need to be. It needs to be low enough for the existing coping processes to work well, and those processes need to be applied anyway. So really the question is always about cost: what's the cheapest way for me to get to a desired vulnerability rate? Which brings me to why I may prefer a low-level language that doesn't prevent UAF: because the language that does prevent UAF has a cost that is not worth it for me, either because UAF vulnerabilities are not a major risk for my application or because I have cheaper ways to prevent them (without necessarily eliminating the possibility of UAF itself), such as with one of the modern pointer-tagging techniques. | | |
| ▲ | johncolanduoni 3 days ago | parent [-] | | I buy that Zig would fit the risk/reward envelope for some projects. My only issue was with using a breakdown of amalgamated CVEs (where most of the software would never actually be written in Zig or Rust regardless) to demonstrate that. Perhaps that misunderstanding about my claims is most of the source of our disagreement. To your point about V8 and CPython: that calculus makes sense if I'm Microsoft and I could spend time/money on memory safety in CPython or on making CSRF in whatever Python library I use harder. My understanding is that the proportions of the budget for different areas of vulnerability research at any tech giant would in fact vindicate this logic. However, if I'm on the V8 team or a CPython contributor and I'm trying to reduce vulnerabilities, I don't have any levers to pull for CSRF or SQL injection without just instead working on a totally different project that happens to be built on the relevant language. If my day job is to reduce vulnerabilities in V8 itself, those would be totally out of scope and everybody would look at me like I'm crazy if I brought it up in a meeting. Similarly, if I'm choosing a language to (re)write my software in and Zig is on the table, I am probably not super worried about CSRF and SQL injection - most likely I'm not writing an API accessed by a browser or interfacing with a SQL database at all! Also I have faith that almost all developers who know what Zig is in the first place would not write code with a SQL injection vulnerability in any language. That those are still on the top ten list is a condemnation of our entire species, in my book. | |
| ▲ | pron 3 days ago | parent [-] | | > If my day job is to reduce vulnerabilities in V8 itself, those would be totally out of scope and everybody would look at me like I'm crazy if I brought it up in a meeting. Maybe (and I'll return to that later), but even if the job were to specifically reduce vulnerabilities in V8, it may not be the case that focusing on UAF is the best way to go, and even if it were, it doesn't mean that eliminating UAF altogether is the best way to reduce UAF vulnerabilities. More generally, memory safety => fewer vulnerabilities doesn't mean fewer vulnerabilities => memory safety. When some problem is a huge cause of exploitable vulnerabilities and eliminating it is cheap - as in the case of spatial memory safety - it's pretty easy to argue that eliminating it is sensible. But when it's not as big a cause, when the exploits could be prevented in other ways, and when the cost of eliminating the problem at the source is high, it's not so clear cut that that's the best way to go. The costs involved could actually increase vulnerabilities overall. A more complex language could have negative effects on correctness (and so on security) in some pretty obvious ways: longer build times could mean less testing; less obvious code could mean more difficult reviews. But I would say that there's even a problem with your premise about "the job". The more common vulnerabilities are in JS, the less value there is in reducing them in V8, as the relative benefit to your users will be smaller. If JS vulnerabilities are relatively common, there could, perhaps, be more value to V8 users in improving V8's performance than in reducing its vulnerabilities. BTW, this scenario isn't so hypothetical for me, as I work on the Java platform, and I very much prefer spending my time on trying to reduce injection vulnerabilities in Java than on chasing down memory-safety-related vulnerabilities in HotSpot (because there's more security value to our users in the former than in the latter). I think Zig is interesting from a programming-language design point of view, but I also think it's interesting from a product design point of view in that it isn't so laser-focused on one thing. It offers spatial memory safety cheaply, which is good for security, but it also offers a much simpler language than C++ (while being just as expressive) and fast build times, which could improve productivity [1], as well as excellent cross-building. So it has something for everyone (well, at least people who may care about different things). [1]: These could also have a positive effect on correctness, which I hinted at before, but I'm trying to be careful about making positive claims on that front, because if there's anything I've learnt in the field of software correctness, it's that things are very complicated, and it's hard to know how best to achieve correctness. Even the biggest names in the field have made some big, wrong predictions. | | |
| ▲ | johncolanduoni 3 days ago | parent [-] | | > BTW, this scenario isn't so hypothetical for me, as I work on the Java platform, and I very much prefer spending my time on trying to reduce injection vulnerabilities in Java than on chasing down memory-safety-related vulnerabilities in HotSpot (because there's more security value to our users in the former than in the latter). That's a good example and I agree with you there. I think the difference with V8 though is twofold: 1. Nobody runs fully untrusted code on HotSpot today and expects it to stop anybody from doing anything. For browser JavaScript engines, of course, the expectation is that the engine (and the browser built on it) are highly resistant to software sandbox escapes. A HotSpot RCE that requires a code construction nobody would actually write is usually unexploitable - if you can control the code the JVM runs, you already own the process. A JavaScript sandbox escape is in most cases a valuable part of an exploit chain for the browser. 2. Even with Google's leverage on the JS and web standardization processes, they have very limited ability to ship user-visible security features and get them adopted. Trusted Types, which could take a big chunk out of very common XSS vulnerabilities and wasn't really controversial, was implemented in Safari five years after Chrome shipped it. Firefox still doesn't support it. Let's be super optimistic and say that after another five years it'll be as common as CSP is today - that's ten years to provide a broad security benefit. These are of course special aspects of V8's security environment, but having a mountain of memory-safe code you can tweak on top of your unsafe code, like the JVM has, is also unusual. The main reason I'd be unlikely to reach for Zig + temporal pointer auth on something I work on is that I don't write a lot of programs that can't be done in a normie GC-based memory-safe programming language but for which having to debug UAF and data-race bugs (even if they crash cleanly!) is a suitable tradeoff for the Rust -> Zig drop in language complexity. | |
| ▲ | pron 2 days ago | parent [-] | | I agree with your observations about the differences between HotSpot and V8, but my general point is precisely that where you want to focus for security is complicated and application-specific, and that the relative risk of different vulnerability causes does matter. As to your last point, I certainly accept that that could be the case for some, but the opposite is also likely: if UAF is not an outsized cause of problems, then a simpler language that, hopefully, can make catching/debugging all bugs easier could be more attractive than one that could be tilting too much in favour of eliminating UAF possibly at the expense of other problems. My point being that it seems like there are fine reasons to prefer a Rust-like approach over a Zig-like approach and vice-versa in different situations, but we simply don't yet know enough to tell which one - if any - is universally or even more commonly superior to the other. |
|
|
|
|
|
|
| |
| ▲ | pjmlp 3 days ago | parent | prev [-] | | Ideally neither Zig nor Rust would matter. Languages like Modula-3 or Oberon would have taken over the world of systems programming. Unfortunately there are too many non-believers for systems programming languages with automatic resource management to take off as they should. Despite everything, kudos to Apple for pushing Swift no matter what, as it seems to be the only way to drive adoption. | |
| ▲ | pron 3 days ago | parent | next [-] | | > Unfortunately there are too many non-believers for systems programming languages with automatic resource management to take off as they should. Or those languages had other (possibly unrelated) problems that made them less attractive. I think that in a high-economic-value, competitive activity such as software, it is tenuous to claim that something delivers a significant positive gain and at the same time that that gain is discarded for irrational reasons. I think at least one of these is likely to be false, i.e. either the gain wasn't so substantial or there were other, rational reasons to reject it. | | |
| ▲ | johncolanduoni 3 days ago | parent | next [-] | | I’m not willing to go to bat for Oberon, but large swaths of software engineering are done with no tradeoff analysis of different technologies at all. Most engineers know one imperative programming language and maybe some SQL. If you ask them what to use, they will simply wax poetic about how the one language they know is the perfect fit for the use-case. Even for teams further toward the right of the bell curve, historical contingencies have a greater impact than they do in more grounded engineering fields. There are specialties of course, but nobody worries that when they hire a mechanical engineer someone needs to make sure the engineer can make designs with a particular brand of hex bolt because the last 5 years of the company’s designs all use that brand. | | |
| ▲ | pron 3 days ago | parent [-] | | If a language offered a significant competitive advantage, such an analysis wouldn't be necessary. Someone would capitalise on it, and others would follow. There are selective pressures in software. Contingencies play an outsized role only when the intrinsics don't. | | |
| ▲ | johncolanduoni 3 days ago | parent [-] | | My point is that for most of the software world, selective pressures are much weaker than things like switching costs and ecosystem effects. The activation energy for a new tech stack is massive - so it's very easy to get stuck in local maxima for a long time. | | |
| ▲ | pron 2 days ago | parent [-] | | You call it weak selective pressures, but another way of saying it is low fitness advantage. And we can see that because the programming language landscape is far from static, and newcomers do gain adoption very quickly every now and then. In fact, when we look at the long list of languages that have become super-popular and even moderately popular - including languages that have grown only to later shrink rather quickly - say Fortran, COBOL, C, C++, JavaScript, Java, PHP, Python, Ruby, C#, Kotlin, Go, TypeScript, we see languages that are either more specific to some domains or more general, some reducing switching costs (TS, Kotlin) some not, but we do see that the adoption rate is proportional to the language's peak market share, and once the appropriate niche is there (think of a possibly new/changed environment in biological evolution) we see very fast adoption, as we'd expect to see from a significant fitness increase. So given that many languages displace incumbents or find their own niches, and that the successful ones do it quickly, I think that the most reasonable assumption to start with when a language isn't displaying that is that its benefits just aren't large enough in the current environment(s). If the pace of your language's adoption is slow, then: 1. the first culprit to look for is the product-market fit of the language, and 2. it's a bad sign for the language's future prospects. I guess it's possible for something with a real but low advantage to spread slowly and reach a large market share eventually, but I don't think it's ever happened in programming languages, and there's the obvious risk of something else with a bigger advantage getting your market in the meantime. |
|
|
| |
| ▲ | pjmlp 3 days ago | parent | prev [-] | | As proven in several cases, it is mostly caused by management not being willing to sustain the required investment to make it happen. Projects like Midori, Swift, Android, Maxine VM, GraalVM only happen when someone high enough is willing to keep them going until they take off. When they fail, it is usually because management backing fell through, not because there wasn't a way to sort out whatever the cause was. Even Java had enough backing from Sun, IBM, Oracle and BEA during its early uncertainty days outside being a language for applets, until it actually took off on servers and mobile phones. If Valhalla never makes it, will that be because Oracle gave up funding the team after all these years, or because it was impossible and a waste of money? |
| |
| ▲ | lerno 3 days ago | parent | prev [-] | | Unfortunately Swift is a mess of a language, trying to cram in as many language features as possible, while still not getting close to being a good replacement for Objective-C. AND it's the slowest language to compile among languages with substantial adoption. It's just pig-headedness by Apple, nothing more. | |
| ▲ | pjmlp 3 days ago | parent [-] | | I agree about the toolchain problems; as for the rest, we don't need another Go flavour, with its boilerplate and anti-language-research culture. | |
| ▲ | lerno 2 days ago | parent [-] | | It is well known that Swift's design set out without any prior knowledge of the language it was replacing (Objective-C), with only the most junior members of the team having used it to any great extent. Instead, Swift was designed around the use-cases the team was familiar with, which would be C++ and compilers. Let's just say that the impedance mismatch between that and rapid UI development was pretty big. From C++ they also got the tolerance for glacial compile times (10-50 times as slow as compiling the corresponding Objective-C code). In addition, they ran big experiments, such as value semantics backed by copy-on-write, which they thought was cool but which is – again – worthless in the common problem domains. Since then, the language has just been adding features at a speed even D can't match. However, one thing the language REALLY GETS RIGHT, and which is very under-appreciated, is that it duplicated Objective-C's stability across API versions. ObjC is best in class when it comes to forward and backwards compatibility, and Swift has done some AWESOME work to make that work despite the difficulties. |
|
|
|
|
|
| ▲ | dayvster 3 days ago | parent | prev | next [-] |
| Segfaults go brrr. All jokes aside, it doesn’t actually take much discipline to write a small utility that stays memory safe. If you keep allocations simple, check your returns, and clean up properly, you can avoid most pitfalls. The real challenge shows up when the code grows, when inputs are hostile, or when the software has to run for years under every possible edge case. That’s where “just be careful” stops working, and why tools, fuzzing, and safer languages exist. |
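For concreteness, here is what that "check your returns, clean up properly" discipline looks like in a small Zig utility. This is an editorial sketch, not code from the thread, and it assumes a recent Zig std (the general-purpose allocator has been renamed and retuned across releases):

```zig
const std = @import("std");

pub fn main() !void {
    // A debug-friendly allocator that reports leaks when deinitialized.
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit(); // leak check on exit; result ignored here
    const alloc = gpa.allocator();

    // `try` forces the failure path to be handled: no unchecked returns.
    const buf = try alloc.alloc(u8, 1024);
    defer alloc.free(buf); // cleanup declared right next to the allocation

    @memset(buf, 0);
    std.debug.print("allocated and zeroed {d} bytes\n", .{buf.len});
}
```

As the rest of the thread argues, the hard part is keeping this up once lifetimes stop being this simple.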
| |
| ▲ | KingOfCoders 3 days ago | parent | next [-] | | My assumption is a small utility becomes a big utility. | |
| ▲ | uecker 3 days ago | parent | prev [-] | | And why would a segfault be worse than a panic? Data corruption or out-of-bounds memory accesses are the real problems. But in reality, most C programs I use daily have never crashed in decades. |
|
|
| ▲ | dev_l1x_be 3 days ago | parent | prev | next [-] |
The number of segfaults I have seen with Ghostty did not raise my spirits. |
| |
| ▲ | dpatterbee 3 days ago | parent | next [-] | | I've had at least one instance of Ghostty running on both my work and personal machine continuously since I first got access to the beta last November, and I haven't seen a single segfault in that entire time. When have you seen them? | | |
| ▲ | metaltyphoon 3 days ago | parent | next [-] | | Look at the issue tracker and its history too. | |
| ▲ | dmit 3 days ago | parent | prev | next [-] | | I've seen the amount of effort Mitchell & co. put into ensuring the memory safety of Ghostty in the 1.2 release notes, but after upgrading I am still afraid to open a new pane while there's streaming output in the current one, because in 1.1.3 that meant a crash more often than not. | |
| ▲ | mr90210 3 days ago | parent | prev [-] | | Google: "wikipedia Evidence of absence" Also, https://github.com/ghostty-org/ghostty/issues?q=segfault | | |
| ▲ | dpatterbee 3 days ago | parent [-] | | So Ghostty was first publicly released on I think December 27th last year, then 1.0.1, 1.1.0, 1.1.1, and 1.1.2 were released within the next month and a half to fix bugs found by the large influx of users, and there hasn't been a segfault reported since. I would recommend that users who are finding a large number of segfaults should probably report it to the maintainers. |
|
| |
| ▲ | hnaccount19293 3 days ago | parent | prev | next [-] | | Bun is much worse in this regard too. | | |
| ▲ | johncolanduoni 3 days ago | parent [-] | | It makes me sad, because they demonstrated JavaScriptCore is shockingly better than V8 for node-likes. The Typescript compiler (which like basically any non-trivial typechecker is CPU bound) is consistently at least 2x faster with Bun on large projects I've worked on. | | |
| ▲ | pjmlp 3 days ago | parent [-] | | When TypeScript finishes their Go rewrite, that will become irrelevant, and I'd rather have the compiler from the same people that design the language. | |
| ▲ | johncolanduoni 3 days ago | parent [-] | | For that example sure, and admittedly the entire JavaScript/TypeScript processing ecosystem is moving in that direction. But the TypeScript compiler is not the only CPU-bound JavaScript out there. | | |
| ▲ | pjmlp 3 days ago | parent [-] | | There are plenty of memory safe compiled languages to rewrite that JavaScript into. |
|
|
|
| |
| ▲ | neerajk 3 days ago | parent | prev | next [-] | | segfaults raise my belief in spirits | | |
| ▲ | greesil 3 days ago | parent [-] | | Possibly a good Halloween costume idea to go as a segfault. It would scare some people. |
| |
| ▲ | txdv 3 days ago | parent | prev [-] | | I haven't seen a single one. | | |
|
|
| ▲ | quotemstr 3 days ago | parent | prev | next [-] |
> The words of every C programmer who created a CVE. Much of Zig's user base seems to be people new to systems programming. To someone coming from a managed-code background, writing native code feels like being a powerful wizard casting fireball everywhere. After you write a few unsafe programs without anything going obviously wrong, you feel invincible. You start to think the people crowing about memory safety are doing it because they're stupid, or cowards, or both. You find it easy to allocate and deallocate when needed: "just" use defer, right? Therefore, if someone screws up, that's a personal fault. You're just better, right? You know who used to think that way? Doctors. Ignaz Semmelweis famously discovered that hand-washing before childbirth decreased mortality by an order of magnitude. He died poor and locked in an asylum because doctors of the day were too proud to acknowledge the need to adopt safety measures. If a mandatory pre-surgical hand-washing step prevented complications, that implied the surgeon had a deficiency in cleanliness and diligence, right? So they demonized Semmelweis, and patients continued for decades to die needlessly. I'm sure that if those doctors had been on the internet today, they would say, as the Zig people do say, "skill issue". It takes a lot of maturity to accept that even the most skilled practitioners of an art need safety measures. |
| |
| ▲ | lerno 3 days ago | parent [-] | | I can't speak for Zig users, but an interesting alternative to just new/delete or malloc/free and various garbage collection strategies is pervasive use of temp allocation using arenas, such as Jai and Odin's temp allocators (essentially frame allocators) and C3's stack-like temp allocator. Zig also favours using arenas, but more ad hoc. What happens in those cases is that you drop a whole lot of disorganized dynamic and stack allocations and just handle them in a batch. So in all cases where the problem is tracking temporary objects, there's no need to track ownership and such. It's a complete non-problem. So if you're writing code in domains where the majority of the effort of manual memory management is tracking temporary allocations, then in those cases you can't really meaningfully say that because Rust is safer than a corresponding malloc/free program in C/C++ it's also safer than the C3/Jai/Odin/Zig solution using arenas. And I think a lot of the disagreement comes from this. Rust devs often don't think that switching the allocation strategy matters, so they argue against what's essentially a strawman built from assumed malloc/free-based memory patterns that don't apply. ON THE OTHER HAND, there are cases where this isn't true and you need to do things like safely pass data back and forth between threads. Arenas don't help with that at all. So in those cases I think everyone would agree that Rust or Java or Go is much safer. So the difference between domains where the former or the latter dominates needs to be recognised, or there can't possibly be any mutual understanding. | |
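A minimal sketch of the batch pattern described here, in Zig (an editorial illustration assuming a recent std; ArenaAllocator's exact API has moved around between releases, and `handleRequest` is a hypothetical name):

```zig
const std = @import("std");

fn handleRequest(backing: std.mem.Allocator) !void {
    // Every temporary allocation for this unit of work comes from one arena.
    var arena = std.heap.ArenaAllocator.init(backing);
    defer arena.deinit(); // one call frees the whole batch
    const tmp = arena.allocator();

    // No individual frees and no ownership tracking for any of these:
    const header = try tmp.alloc(u8, 256);
    const body = try std.fmt.allocPrint(tmp, "{d} header bytes", .{header.len});
    _ = body; // ... parse, format, respond ...
}
```

The point is that temporaries scoped to a request or frame need no per-object ownership story at all.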
| ▲ | johncolanduoni 3 days ago | parent | next [-] | | If you're allocating most things from a set of arenas alive for the same scope, Rust's borrow checker complexity almost entirely fades away. You'll have one lifetime for all inputs and outputs of your functions, so the inferred lifetimes will always be correct. If you have multiple arenas being allocated from with different scopes, you're asking for trouble with the Zig model while the Rust borrow checker will keep which data is from which arena straight. | |
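To make the "asking for trouble" case concrete, here is a hypothetical sketch of the kind of escape Zig compiles without complaint, while Rust's lifetime on the returned reference would reject the equivalent code (editorial example, recent Zig std assumed):

```zig
const std = @import("std");

// Hypothetical: the arena's scope is narrower than the returned slice's.
fn parseName(backing: std.mem.Allocator) ![]u8 {
    var arena = std.heap.ArenaAllocator.init(backing);
    defer arena.deinit(); // arena memory is released on return...
    const name = try arena.allocator().dupe(u8, "temporary");
    return name; // ...but the slice escapes: a use-after-free in waiting
}
```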
| ▲ | pjmlp 2 days ago | parent | prev | next [-] | | Basically Pascal's Mark() and Release() calls. http://www.3kranger.com/HP3000/mpeix/doc3k/B3150290023.10194... What is old is new again. | |
| ▲ | quotemstr 3 days ago | parent | prev [-] | | If arena allocators were a panacea, Subversion and Apache would be safer than your typical C program, yes? |
|
|
|
| ▲ | johncolanduoni 3 days ago | parent | prev | next [-] |
| Yeah, I often wonder if people who have this attitude have ever tried to run a non-trivial C program they wrote with the clang sanitizers on. A humbling experience every time. |
|
| ▲ | jmull 3 days ago | parent | prev | next [-] |
I think the problem the practical programmer has with a statement like this is the implication that only certain languages require some basic understanding and a bit of discipline to avoid CVEs. Rust has a strict model that effectively prevents certain kinds of logic errors/bugs. So that's good (if you don't mind the price). But it doesn't address all kinds of other logic errors/bugs. It's like closing one door to the barn, but there are six more still wide open. I see Rust as an incremental improvement over C, which comes at quite a hefty price. Something like Zig is also an incremental improvement over C, which also comes at a price, but it looks like a significantly smaller one. (Anyway, I'm not sure Zig is even the right comp for Rust. There are various languages that provide memory safety, if that's your priority, which also generally allow dropping into "unsafe" -- typically C -- where performance is needed.) |
| |
| ▲ | estebank 3 days ago | parent | next [-] | | > But it doesn't address all kinds of other logic errors/bugs. It's like closing one door to the barn, but there are six more still wide open. Could you point at some language features that exist in other languages that Rust doesn't have that help with logic errors? Sum types + exhaustive pattern matching is one of the features that Rust does have that helps a lot to address logic errors. Immutability by default, syntactic salt on using globals, trait bounds, and explicit cloning of `Arc`s are things that also help address or highlight logic bugs. There are some high-level bugs that the language doesn't protect you from, but I know of no language that would. Things like path traversal bugs, where passing in `../../secret` lets an attacker access file contents that weren't intended by the developer. The only feature that immediately comes to mind that Rust doesn't have that could help with correctness is constraining existing types, like specifying that a u8 value is only valid between 1 and 100. People are working on that feature under the name "pattern types". | |
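As a concrete illustration of the sum-types-plus-exhaustive-matching point (rendered in Zig's tagged-union syntax to match the other sketches in this thread; Rust's enums and `match` give the same guarantee):

```zig
const std = @import("std");

const Event = union(enum) {
    quit,
    key: u8,
    resize: struct { w: u32, h: u32 },
};

fn describe(e: Event) void {
    // The switch must cover every variant: adding a new variant to Event
    // turns every non-exhaustive switch into a compile error rather than
    // a latent logic bug in some forgotten else-branch.
    switch (e) {
        .quit => std.debug.print("quit\n", .{}),
        .key => |k| std.debug.print("key {d}\n", .{k}),
        .resize => |r| std.debug.print("resize {d}x{d}\n", .{ r.w, r.h }),
    }
}
```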
| ▲ | jmull 3 days ago | parent [-] | | IMO, simplicity is the number one feature. The developer should spend their attention on the problem space as much as possible, and on the solution space as little as possible. There's a complexity cost to adding features, and while each one may make sense on its own, in aggregate they may burden the developer with too much complexity. | |
| ▲ | ViewTrick1002 2 days ago | parent [-] | | The question is, what is simplicity? Go tries to hide the issues until data loss happens: it has had trouble dealing with non-UTF-8 filenames, because strings are UTF-8 by convention but not in reality, and some functions expect UTF-8 while others can work with any collection of bytes. https://blog.habets.se/2025/07/Go-is-still-not-good.html Or the Go time library, which is a monster of special cases after they realized they needed monotonic clocks [1] but had to squeeze them into the existing API. https://pkg.go.dev/time Rust is on the other end of the spectrum: explicit over implicit, though you can still assume stuff works by panicking on unexpected errors, which makes the problem easy to fix if you stumble upon it after years of added cruft and changing requirements. [1]: https://github.com/golang/go/issues/12914 |
|
| |
| ▲ | dmytrish 3 days ago | parent | prev [-] | | Actually, the strong type system is often why people like to write Rust. Because encoding logic invariants in it also helps to prevent logic bugs! There is a significant crowd of people who don't necessarily love borrow checker, but traits/proper generic types/enums win them over Go/Python. But yes, it takes significant maturity to recognize and know how to use types properly. |
|
|
| ▲ | zwnow 3 days ago | parent | prev | next [-] |
| "Actually memory management is easy, you just have to...." - Every C programmer I've talked to No its not, if it was that easy C wouldn't have this many memory related issues... |
| |
| ▲ | r_lee 3 days ago | parent [-] | | It may be easy to do memory management, but it's not so easy to detect whether you've made a fatal mistake when such mistakes won't cause apparent defects. Avoiding all memory-management mistakes is not easy, and the chance of disaster grows exponentially as the codebase gets bigger. | |
| ▲ | zwnow 3 days ago | parent [-] | | Absolutely, a big factor is undefined behavior, which makes it look like everything works. Until it doesn't. I quit C long ago because I don't want to deal with manual memory management in any language. I was overwhelmed by Zig's approach as well. Rust is pretty much the only language making it bearable to me. |
|
|
|
| ▲ | zem 3 days ago | parent | prev | next [-] |
| that is the precise point at which the article lost me. ironically it's often good programmers who don't "get" the benefit of building memory management and discipline into the language, rather than leaving it to be a cognitive burden on every programmer. |
|
| ▲ | naikrovek 3 days ago | parent | prev | next [-] |
are you saying that such understanding isn't enough, or that every C programmer who said that didn't understand those things? C and Zig aren't the same. I would wager that syntax differences between languages can help you see things in one language that are much harder to see in another. I'm not saying that Zig or C is good or bad for this, or that one is better than the other in terms of how easily you can see memory problems with your eyes; I'm just saying I would bet there's some syntax that could be employed which makes memory usage much clearer to the developer, instead of requiring that the developer keep track of these things in their mind. Even requiring the developer to manually annotate each function, so that some metaprogram that runs at compile time can check that nothing is out of place, could help detect memory leaks, I would think. Or something; that's just an idea. There's a whole metaprogramming world of possibilities here that Zig allows and C simply doesn't. I think there's a lot of room for tooling like this to detect problems without forcing you to contort yourself into strange shapes simply to make the compiler happy. |
| |
| ▲ | kstenerud 3 days ago | parent | next [-] | | > are you saying that such understanding isn't enough, or that every C programmer who said that didn't understand those things? Probably both. They're words of hubris. C and Zig give the appearance of practicality because they allow you to take shortcuts under the assumption that you know what you're doing, whereas Rust does not; it forces you to confront the edge cases in terms of ownership and provenance and lifetime and even some aspects of concurrency right away, and won't compile until you've handled them all. And it's VERY frustrating when you're first starting because it can feel so needlessly bureaucratic. But then after a while it clicks: Ownership is HARD. Lifetimes are HARD. And suddenly when going back to C and friends, you find yourself thinking about these things at the design phase rather than at the debugging phase - and write better, safer code because of it. And then when you go back to Rust again, you breathe a sigh of relief because you know that these insidious things are impossible to screw up. |
| ▲ | capitol_ 3 days ago | parent | prev [-] | | Just understanding the rules is not enough; you also need to be consistently good, so that you never make a mistake that gets into production. On both your average days and your bad days. Over the 40 to 50 years that your career lasts. I guess those kinds of developers exist, but I know that I'm not one of them. | |
| ▲ | naikrovek a day ago | parent [-] | | I am pretty confident that a language with syntax that allows you to feel that freedom that C gives you AND is safe to write software with (without garbage collection) is possible, we just need to come up with a reasonable syntax that has both of those features. It won't look like C or Go or any of that, I don't think. I am not a computer scientist (I have no degree in CS) but it sure seems like it would be possible to determine statically if a reference could be misused in code as written without requiring that you be the Rust Borrow Checker, if the language was designed with those kinds of things from the beginning. | | |
| ▲ | capitol_ a day ago | parent [-] | | Everything you wrote sounds like Rust, except that you didn't want it to be Rust :). |
|
|
|
|
| ▲ | markphip 3 days ago | parent | prev [-] |
| Came here to add the same comment. Had it on my clipboard already to post. You said it better |