| ▲ | simonask 3 days ago |
| The benefit of Zig seems to be that it allows you to keep thinking like a C programmer. That may be great, but to a certain extent it’s also just a question of habit. Seasoned Rust coders don’t spend time fighting the borrow checker - their code is already written in a way that just works. Once you’ve been using Rust for a while, you don’t have to “restructure” your code to please the borrow checker, because you’ve already thought about “oh, these two variables need to be mutated concurrently, so I’ll store them separately”. The “object soup” is a particular approach that won’t work well in Rust, but it’s not a fundamentally easier approach than the alternatives, outside of familiarity. |
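A minimal sketch of the restructuring simonask describes — storing independently-mutated data in separate fields so disjoint mutable borrows just work (struct and field names here are hypothetical, not from the thread):

```rust
// Two collections that must be mutated "concurrently" within one update step.
// Kept as separate fields, the compiler can see the mutable borrows are disjoint.
struct World {
    positions: Vec<f32>,
    velocities: Vec<f32>,
}

// Borrowing two different fields of the same struct mutably at once is fine;
// no fighting with the borrow checker is needed.
fn step(world: &mut World) {
    let positions = &mut world.positions;
    let velocities = &world.velocities;
    for (p, v) in positions.iter_mut().zip(velocities.iter()) {
        *p += *v;
    }
}
```

Had positions and velocities been interleaved inside one `Vec<Entity>` behind shared references ("object soup"), the same update would require a workaround; the point is that the data layout, chosen up front, is what makes the borrows trivial.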
|
| ▲ | mattwilsonn888 3 days ago | parent | next [-] |
| As long as the audience accepts the framing that ergonomics doesn't matter because it can't be quantified, the hand-waving exemplified above will confound. "This chair is guaranteed not to collapse out from under you. It might be a little less comfortable and a little heavier, but most athletic people get used to that and don't even notice!" Let's quote the article: > I’d say as it currently stands Rust has poor developer ergonomics but produces memory safe software, whereas Zig has good developer ergonomics and allows me to produce memory safe software with a bit of discipline. The Rust community should be upfront about this tradeoff - it's a universal tradeoff, that is: Safety is less ergonomic. It's true when you ride a skateboard with a helmet on, it's true when you program, it's true for sex. Instead you see a lot of arguments with anecdotal or indeterminate language. "Most people [that I talk to] don't seem to have much trouble unless they're less experienced." It's an amazing piece of rhetoric. In one sentence the ergonomic argument has been dismissed by denying subjectivity exists or matters and then implying that those who disagree are stupid. |
| |
| ▲ | jasonpeacock 3 days ago | parent | next [-] | | > produce memory safe software with a bit of discipline "a bit of discipline" is doing a lot of work here. "Just don't write (memory) bugs!" hasn't produced (memory) safe C, and they've been trying for 50 years. The best practice has been to bolt on analyzers and strict coding standards to enforce what should be part of the language. You're either writing in Rust, or you're writing in something else + using extra tools to try and achieve the same result as Rust. | | |
| ▲ | fpoling 3 days ago | parent | next [-] | | Like Rust, Zig has type-safe enums/sum types. That alone eliminates a lot of problems with C. Plus sane error handling with good defaults that are arguably better than Rust's, which also contributes to code with fewer bugs. Surely there is no borrow checker, but a lot of memory-safety issues in C and C++ come from the lack of good containers with sane interfaces (std::* in C++ is just bad from a memory-safety point of view). If C++ had gained proper sum types, error handling, and Zig-style templates 15 years ago, instead of the insanity that is modern C++, Rust might not exist or might be much more niche at this point. | | |
| ▲ | TuxSH 2 days ago | parent | next [-] | | > If C++ gained the proper sum types AFAIK "P2688 R5 Pattern Matching: match Expression" exists and is due in C++29 (what actually matters is when it's accepted and implemented by compilers anyway). Also, cheap bounds checks (in Rust) are contingent on Rust's aliasing model. | |
| ▲ | whatevaa 2 days ago | parent | prev [-] | | Buffer overruns are the most common memory-related RCEs. So bounds-checking arrays/strings BY DEFAULT is needed. |
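A small illustration of what default bounds checking buys, sketched in Rust (function name is made up for the example): out-of-range access either panics or returns `None`, but never silently reads past the buffer.

```rust
// Indexing with `[]` panics on an out-of-range index instead of reading
// past the buffer; `get` is the non-panicking path that returns Option.
// Neither is undefined behavior, which is the property that forecloses
// buffer-overrun RCEs by default.
fn safe_lookup(buf: &[u8], i: usize) -> Option<u8> {
    buf.get(i).copied()
}
```

Zig's slices give the equivalent guarantee in its safe build modes, which is the "by default" behavior the comment is asking for.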
| |
| ▲ | tptacek 3 days ago | parent | prev | next [-] | | I actively dislike Zig's memory safety story, but this isn't a real argument until you can start showing real vulnerabilities --- not models --- that exploit the gap in rigor between the two languages. Both Zig and Rust are a step function in safety past C; it is not a given that Rust is that from Zig, or that that next step matters in practice the way the one from C does. | | |
| ▲ | dadrian 3 days ago | parent | next [-] | | I like Zig, although the Bun GitHub tracker is full of segfaults in Zig that are presumably quite exploitable. Unclear what to draw from this, though. [1]: https://github.com/oven-sh/bun/issues?q=is%3Aissue%20state%3... | | |
| ▲ | vanderZwan 3 days ago | parent [-] | | Wasn't Bun the project where the creator once tweeted something along the lines of "if you're not willing to work 50+ hours a week don't bother applying to my team"? Because if so then I'm not surprised and also don't think Zig is really to blame for that. | | |
| ▲ | dadrian 3 days ago | parent [-] | | Not clear to me there's a correlation between hours worked and number of memory safety vulnerabilities | | |
| ▲ | blacksmith_tb 3 days ago | parent [-] | | I think the implication is something like "overwork / fraying morale from long hours means shipping more bugs". | | |
| ▲ | tptacek 3 days ago | parent [-] | | The point of memory-safe languages is to foreclose on a set of particularly nasty bugs, regardless of how frayed engineer morale is. | | |
| ▲ | vanderZwan 2 days ago | parent [-] | | I'm pretty sure that in an overworked environment the engineers would reach for Rust's unsafe mode pretty quickly because they're too tired to make sense of the borrow checker. | | |
| ▲ | timschmidt 2 days ago | parent | next [-] | | I'm no expert, but I've been hacking in Rust for several years now, and the only unsafe I've written was required as part of building a safe interface over some hardware peripherals. Exactly as intended. The borrow checker is something new Rust devs struggle with for a couple months, as they learn, then the rules are internalized and the code gets written just like any other language. I think new devs only struggle with the borrow checker because everyone has internalized the C memory model for the last 50 years. In another 50, everyone will be unlearning Rust for whatever replaces it. | |
| ▲ | dadrian 2 days ago | parent | prev [-] | | Web browsers and operating systems are full of memory safety bugs, and are not written by engineers in crunch these days. |
|
|
|
|
|
| |
| ▲ | fuzztester 3 days ago | parent | prev | next [-] | | >I actively dislike Zig's memory safety story Why? Interested to know. Just for background, I have not tried out either Zig or Rust yet, although I have been interestedly reading about both of them for a while now, on HN and other places, and also in videos, and have read some of the overview and docs of both. But I have a long background in C dev earlier. And I have been checking out C-like languages for a while such as Odin, Hare, C3, etc. | |
| ▲ | pjmlp 2 days ago | parent | prev [-] | | Modula-2 was already a step function in safety past C, but people did not care because it wasn't given away alongside UNIX. |
| |
| ▲ | rixed 3 days ago | parent | prev [-] | | > "Just don't write (memory) bugs!" hasn't produced (memory) safe C Yes it did, of course. Maybe it takes years of practice and the assistance of tools (there are many, most very good), but it's always been possible to write memory safe large C programs. Sure, it's easier to write a robust program in almost every other language. But to state that nobody ever produced a memory safe C program is just wrong. Maybe it was just rhetoric for you, but I'm afraid some may read that and think it's a well established fact. | | |
| ▲ | zanellato19 3 days ago | parent [-] | | >Yes it did, of course. Maybe it takes years of practice, the assistance of tools (there are many, most very good), but it's always been possible to write memory safe large C programs. Can you provide examples for it? Because it honestly doesn't seem like it has ever been done. | | |
| ▲ | rixed 2 days ago | parent | next [-] | | I don't understand where you stand. Surely, you don't mean that all C programs have memory bugs. But on my side, I'm not claiming that discipline makes C a memory safe language either. This discussion has taken a weird turn. | | |
| ▲ | ramblerman 2 days ago | parent [-] | | > you don't mean that all C programs have memory bugs Well, all of them "potentially" do, which is enough from a security standpoint. There have been enough zero-days exploiting memory bugs that we know the percentage is also non-trivial. So yes, if programmers can write bugs, they will. Google SREs were famously the first to measure bugs per release as a metric instead of the old-fashioned (and naive) "we aren't gonna write any more bugs" |
| |
| ▲ | 6P58r3MXJSLi 2 days ago | parent | prev | next [-] | | Postfix and SQLite: billions of installations and relatively few incidents. | |
| ▲ | zelphirkalt 2 days ago | parent [-] | | Few incidents != memory safe Few incidents != not badly exploitable Few incidents != no more undiscovered safety bugs/issues I don't think your examples quite cut it. |
| |
| ▲ | dayvster 3 days ago | parent | prev [-] | | [flagged] | | |
| ▲ | hannofcart 3 days ago | parent | next [-] | | Haven't written C in a while, but I think this program has an integer overflow error when you input 2 really large integers such that the sum is more than a 32-bit signed integer. Also I believe entering null values will lead to undefined behaviour. | | |
| ▲ | Karrot_Kream 3 days ago | parent | next [-] | | Memory safe doesn't mean protection from integer overflow unless you use that integer to index into some array. I'm not sure how you'd enter NULL given scanf. | | |
| ▲ | Voultapher 3 days ago | parent [-] | | I'm not sure how showing that gp can't even write a dozen lines of memory safe C proves that doing so for the exponentially harder 100+k LoC projects is feasible. The program contains potential use of uninitialized memory UB, because scanf error return is not checked and num1 and num2 are not default initialized. And a + b can invoke signed integer overflow UB. A program with more than zero UB cannot be considered memory safe. For example if the program runs in a context where stdin can't be read scanf will return error codes and leave the memory uninitialized. | | |
| ▲ | Karrot_Kream 2 days ago | parent [-] | | > num1 and num2 are not default initialized num1 and num2 are declared on the stack and not the heap. The lifetimes of the variables are scoped to the function and so they are initialized. Their actual values are implementation-specific ("undefined behavior") but there is no uninitialized memory. > And a + b can invoke signed integer overflow UB. A program with more than zero UB cannot be considered memory safe. No, memory safety is not undefined behavior. In fact Rust also silently allows signed integer overflow. Remember, the reason memory safety is important is because it allows for untrusted code execution. Importantly here, even if you ignore scanf errors and integer overflow, this program accesses no memory that is not stack local. Now if one of these variables was cast into a pointer and used to index into a non-bounds-checked array then yes that would be memory unsafety. But the bigger code smell there is to cast an index into a pointer without doing any bounds checking. That's sort of what storing indexes separately from references in a lot of Rust structures is doing inadvertently. It's validating accesses into a structure. | | |
| ▲ | Voultapher 2 days ago | parent [-] | | Regarding initialization, if one wants portable code that works for more than one machine+compiler version, it's advisable to program against the abstract machine specified in the standard. This abstract machine does not contain a stack or heap. Generally your comment strikes me as assuming that UB is some kind of error. In practice UB is more a promise the programmer made to never do certain things, allowing the compiler to assume that these things never happen. How UB manifests is undefined. A program that has more than zero UB cannot be assumed to be memory safe, because we can't make any general assumptions about its behavior. UB is not specified to be localized; it can manifest in any way, rendering all assumptions about the program moot. In practice, when focusing on specific compilers and machines, we can make reasonable localized assumptions, but these are always subject to change with every new compiler version. Memory safety is certainly critical when it comes to exploits, but even in a setting without adversaries it's absolutely crucial for reliability and portability. > In fact Rust also silently allows signed integer overflow. Silently for release builds, and a panic in debug builds. The behavior is implementation defined and not undefined; in practice this is a subtle but crucial difference. Take this example https://cpp.godbolt.org/z/58hnsM3Ge the only kind of UB AFAICT is signed integer overflow, and yet we get an out-of-bounds access. If instead the behavior was implementation defined, the check for overflow would not have been elided. |
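The overflow distinction Voultapher draws can be made concrete in a few lines of Rust (the helper name is made up for the example): overflow is defined behavior, and the standard library's `checked_*`/`wrapping_*` methods let the programmer pick a policy explicitly rather than promising overflow never happens.

```rust
// In Rust, i32 overflow is never UB: release builds wrap two's-complement,
// debug builds panic, and these methods make the choice explicit in the code.
fn add_checked(a: i32, b: i32) -> Option<i32> {
    a.checked_add(b) // None on overflow, instead of a compiler-exploitable UB
}
```

Because the compiler cannot assume overflow is impossible, it also cannot elide a subsequent bounds check the way the linked C++ example shows.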
|
|
| |
| ▲ | dayvster 3 days ago | parent | prev [-] | | har har... have my upvote! |
| |
| ▲ | zanellato19 3 days ago | parent | prev [-] | | I wasn't trying to be a dick, I am saying that my experience is that no big C program is ever safe. You replied that it is possible and I asked for an example. Providing a small script to prove that big C programs are safe isn't enough. | | |
| ▲ | dayvster 3 days ago | parent [-] | | Making a broad statement like there has never been a memory safe C program is a bit of a dickish thing to say, especially when you phrase it as > Can you provide examples for it? Because it honestly doesn't seem like it has ever been done. it comes off as pedantic and arrogant. It obviously is possible to write memory safe software in C, and obviously it has been done before, otherwise we would not be currently communicating over the goddamn internet. Asking for evidence of something this obvious is akin to asking for a source that water is in fact wet. | | |
| ▲ | zanellato19 3 days ago | parent | next [-] | | I think pretty much any non trivial C example has memory safety issues. It doesn't mean that they aren't useful and can't be used. But time and time again we have seen security reports that point to memory issues. So no, I don't think I'm asking for something obvious, quite the contrary. I think the claim that it's possible to write big C programs that are memory safe is really strong and I heavily disagree with it. | |
| ▲ | zelphirkalt 2 days ago | parent | prev | next [-] | | It's not dickish, and it's weird you seem to feel attacked/offended by that. It is a realistic conclusion, that we have come to over the course of decades of C usage. One could call it wisdom or collective learning. | |
| ▲ | ksec 3 days ago | parent | prev [-] | | >> Can you provide examples for it? Because it honestly doesn't seem like it has ever been done. >it comes off as pedantic and arrogant. Interesting the way this was perceived. I thought he was just asking an honest question. It again shows that online discussion and communication is hard. |
|
|
|
|
|
| |
| ▲ | anon-3988 3 days ago | parent | prev | next [-] | | I would argue that good C or C++ code is actually just Rust code with extra steps. So in this sense, Rust gets you to the "desired result" much more easily compared to using C or C++, because with those no one is there to enforce anything and make you do it. You can argue that using C or C++ can get you 80% of the way, but most people don't actively think "okay, how do I REALLY mess up this program?" and fix all the various invariants that they forgot to handle. Even worse, this issue is endemic in higher level dynamic languages like Python too. Most people most of the time only think about the happy path. | | |
| ▲ | wolvesechoes 3 days ago | parent | next [-] | | There are valid and safe programs rejected by the Rust compiler if you don't go through the rituals required to please it (slapping on Rc, RefCell, etc). No amount of "oh, your Rust code is actually something you would have ended up with if you had chosen C" will change that. | | |
| ▲ | simonask 3 days ago | parent | next [-] | | You know, the `unsafe` keyword exists. You’re allowed to use it. If your algorithm is truly safe, and as you say, there are many safe things the borrow checker cannot verify, that’s exactly what `unsafe` is for. Ideally you can also design a safe API around it using the appropriate language primitives to model the abstraction, but there’s no requirement. | |
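The pattern simonask points at — an `unsafe` block hidden behind a safe function whose checks uphold the invariant — looks roughly like this (a deliberately tiny sketch; the function name is invented):

```rust
// A safe API over an unchecked operation. The `unsafe` block skips the
// bounds check, but the branch above it guarantees the index is valid,
// so callers cannot misuse the function to cause UB.
fn first_or_zero(xs: &[u32]) -> u32 {
    if xs.is_empty() {
        0
    } else {
        // SAFETY: we just verified the slice is non-empty, so index 0 exists.
        unsafe { *xs.get_unchecked(0) }
    }
}
```

This is how the standard library itself is built: the algorithm the borrow checker can't verify lives inside `unsafe`, and the safe signature documents the contract.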
| ▲ | crote 3 days ago | parent | prev [-] | | Of course. After all, it is mathematically impossible to prove the correctness of all valid and safe programs - the halting problem clearly shows that. The real question should not be "Are there valid and safe programs Rust will reject?" but "How common is it that your well-written valid and safe program will be rejected by the Rust compiler?". In practice the vast majority will be accepted, and what remains is stuff the Rust compiler cannot prove to be correct. If Rust doesn't like your code, there are two solutions. The first is to go through the rituals to rewrite it as provably-safe code - which can indeed feel a bit tedious if your code was designed using principles which don't map well to Rust. The second is to use `unsafe` blocks - but that means proving its safety is up to the programmer. But as we've historically seen with C and unsafe-heavy Rust code bases, programmers are horrible at proving safety, so your mileage may vary. I don't want to be the "you're holding it wrong" person, but "Rust rejected my valid and safe program" more often than not means "there's a subtle bug in my program I am not aware of yet". The Rust borrow checker has matured a lot since its initial release, and it doesn't have any trouble handling the low-hanging fruit anymore. What's left is mainly complex and hard-to-reason-about stuff, and that's exactly the kind of code humans struggle with as well. |
| |
| ▲ | oconnor663 3 days ago | parent | prev | next [-] | | I think "writing Rust in C++" so to speak means at least two distinctly different things, which are both important. The first thing is that, as an individual programmer, you're being disciplined with memory and thinking about who owns what. The second thing is that as a group of programmers (over time), you all agree about who owns what. There are a lot of ways to learn the first thing, but I'm not sure there are a lot of ways to accomplish the second thing in a large system. | |
| ▲ | vanderZwan 3 days ago | parent | prev [-] | | What does that have to do with Zig though? |
| |
| ▲ | rstuart4133 3 days ago | parent | prev | next [-] | | > The Rust community should be upfront about this tradeoff - it's a universal tradeoff, that is: Safety is less ergonomic. That can be true for small programs. Not always, because Rust's type system makes for programs that can be every bit as compact as Python if the algorithm doesn't interact badly with the borrow checker. Or even if it does. For example this tutorial converts a fairly gnarly C program to Rust: https://cliffle.com/p/dangerust/ The C was 234 lines, the finished memory safe Rust 198 lines. But when it comes to large programs, the ergonomics strangely tips into reverse. By "strangely tips into reverse", I mean yes it takes more tokens and thinking to produce a working Rust program, but overall it saves time. Here a "large program" means one a programmer can't fit in his head all at one time. I think Andrew Huang summed the effect up best, when he said if you start pulling on a thread in a Rust program, you always get to the end. In other languages, you often just make the knot tighter. | |
| ▲ | nicoburns 3 days ago | parent | prev | next [-] | | > The Rust community should be upfront about this tradeoff - it's a universal tradeoff, that is: Safety is less ergonomic. I'd agree with that if the comparison is JavaScript or Python. If the comparison is Zig (or C or C++) then I don't agree that it's universal. I personally find Rust more ergonomic than those languages (in addition to be being safer). | |
| ▲ | thinkharderdev 3 days ago | parent | prev | next [-] | | > As long as the audience accepts the framing that ergonomics doesn't matter because it can't be quantified, the hand-waving exemplified above will confound. I interpreted the parent to be saying that ergonomics IS (at least partly) subjective. The subjective aspect is "what you are used to". And once you get used to Rust its ergonomics are fine, something I agree with having used Rust for a few years now. > The Rust community should be upfront about this tradeoff I think they are. But more to the point, I think that safety is not really something you can reasonably "trade-off", at least not for non-toy software. And I think that because I don't really see C/C++/Zig people saying "we're trading off safety for developer productivity/performance/etc". I see them saying "we can write safe code in an unsafe language by being really careful and having a good process". Maybe they're right, but I'm skeptical based on the never-ending proliferation of memory safety issues in C/C++ code. | | |
| ▲ | mattwilsonn888 3 days ago | parent [-] | | I think you are clearly good-faith. The issue is the underlying and unfair assumption that is so common in these debates: that the memory-unsafe language we're comparing against Rust is always C/C++, rather than a modern approach like Zig or Odin (which will share many arguments against C/C++). You can prove to yourself this happens by looking around this thread! The topic is Zig vs. Rust and just look at how many pro-Rust arguments mention C (including yours). It's a strong argument if we pose C as the opponent, because C can be so un-ergonomic that even Rust with its added constraints competes on that aspect. But compare it to something like Zig or Odin (which has ergonomic and safety features like passing allocators to any and all functions, bounds checking by default, sane slice semantics which preclude the need for pointer arithmetic) and the ergonomics/safety argument isn't so simple. | | |
| ▲ | tialaramex 3 days ago | parent | next [-] | | The ergonomics of Odin are ghastly, it's all special cases all the time. For example, in Rust when we write `for n in 0..10 {` that 0..10 is a Range, we can make one of those, we can store one in a variable, Range is a type. In Odin `for i in 0..<10 {` is uh, magic, we can't have a 0..<10, it's just syntax for the loop. in Rust we can `for puppy in litter {` and litter - whatever type that is - just has to implement IntoIterator, the trait for things which know how to be iterated, and they iterate over whatever that iterator does. In Odin only specific built-in types are suitable, and they do... whatever seemed reasonable to Bill. You can't provide this for your own type, it's a second class citizen and isn't given the same privileges as Odin's built-in types. If you're Ginger Bill, Odin is great, it does exactly what you expected and it covers everything you care about but nothing more. | | |
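A compact sketch of tialaramex's two points — ranges are ordinary values, and any user type can opt into `for` loops via the iterator traits (the `Countdown` type and helper are invented for the example):

```rust
// A user-defined type that counts n, n-1, ..., 1. Implementing Iterator
// gives it IntoIterator for free, so it works in `for` loops just like
// the built-in types — no second-class citizens.
struct Countdown(u32);

impl Iterator for Countdown {
    type Item = u32;
    fn next(&mut self) -> Option<u32> {
        if self.0 == 0 {
            None
        } else {
            self.0 -= 1;
            Some(self.0 + 1)
        }
    }
}

// `for` accepts anything implementing IntoIterator, including a stored Range.
fn collect_all(it: impl IntoIterator<Item = u32>) -> Vec<u32> {
    let mut out = Vec::new();
    for n in it {
        out.push(n);
    }
    out
}
```

`0..4` here is a real `Range<u32>` value that can be stored, passed, and iterated; in Odin the equivalent syntax exists only inside the `for` header.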
| ▲ | mattwilsonn888 2 days ago | parent [-] | | It's called simplicity. Not every single semantic element in the language needs to be a type. `for i in 0..<10` This isn't "magic," it's a loop that initializes a value `i` and checks against it. It's a lot less "magic" than Rust. The iterable types in Odin are slices and arrays - that is hardly arbitrary like you imply. The type system in Rust is mostly useful for its static guarantees. Using it for type-gymnastics and unnecessary abstractions is ugly and performative. Tasks in Odin can be accomplished with simplicity. The C++ misdirection and unbounded type abstractions are simply not appreciated by many. If you want a language with no special cases that is 100% fully abstract then program in a Turing machine. I'll take the language designed to make computers perform actions over a language having an identity crisis with mathematics research, all else equal. Unless I'm doing math research of course - Haskell can be quite fun! Ginger Bill is a PhD physicist as well -- not that education confers wisdom -- but I don't bet his design choices are coming from a resentment of math or abstraction. Absolute generality isn't the boon you think it is. | | |
| ▲ | tialaramex 2 days ago | parent [-] | | The "simplicity" you're so happy about means you only get whatever it is Bill made. If that doesn't work for Bill he'll fix it, but if it doesn't work for you (and at scale, it won't) too bad. Far from just the two special cases you listed I count five, Bill has found need for iterating over: Both built-in array types, strings (but not cstrings), slices and maps. `for a, b in c { ... }` makes a the key and b the value if c is a map, but if c were instead an array then a is the value and b is an index. Presumably both these ideas were useful to Bill and the lack of coherence didn't bother him. Maps are a particularly interesting choice of built-in. We could argue about exactly how a built-in string type should best work, or the growth policy for a mediocre growable array type - and Odin offers two approaches for strings (though not with the same support), but for maps you clearly need implementation options. and instead in the name of "simplicity" Bill locks you into his choice. You're stuck with Bill's choice of hash, Bill's layout, and Bill's semantics. If you want your own "hash table" type that's not map yours will have even worse ergonomics and you can't fix it. Yours can't be iterated with a for loop, it can't be special case initialized even if your users enabled that for the built-in type, and of course all the familiar functions and function groups don't work for your type. I don't have a use for a "language designed to make computers perform actions" when it lacks the fine control needed to get the best out of the machine but insists I must put in all the hard work anyway. |
|
| |
| ▲ | simonask 3 days ago | parent | prev | next [-] | | Zig is an immense improvement, but it’s not a production language at the time of writing. Not a lot of people feel qualified to actually compare the two. At the same time, I will argue that Zig’s improvements over C are much less substantial compared to something like Rust. It’s great, but not a paradigm shift. | |
| ▲ | ksec 3 days ago | parent | prev [-] | | Thank You that makes a lot of sense. I guess we will have to wait for Zig to become 1.0 first and then do a proper comparison. |
|
| |
| ▲ | weinzierl 3 days ago | parent | prev | next [-] | | "It's true when you ride a skateboard with a helmet on." Rust is not the helmet. It is not a safety net that only gives you a benefit in rare catastrophic events. Rust is your lane assist. It relieves you from the burden of constant vigilance. A C or C++ programmer who doesn't feel relief when writing Rust has never acquired the mindset that is required to produce safe, secure and reliable code. | | |
| ▲ | omnicognate 3 days ago | parent | next [-] | | Public Safety Announcement: Lane assist does not relieve you from the burden of constant vigilance. | |
| ▲ | travisgriggs 3 days ago | parent | prev | next [-] | | > Rust is your lane assist. It relieves you from the burden of constant vigilance. Interesting analogy. I love lane assist. When I love it. And hate it when it gets in the way. It can actively jerk the car in weird and surprising ways when presented with things it doesn’t cope well with. So I manage when it’s active very proactively. Rust of course has unsafe… but… to keep the analogy, that would be like driving in a peer group where everyone was always asking me if I had my lane assist on, where when I arrived at a destination, I was badgered with “did you do the whole drive with lane assist?” And if I didn’t, I’d have explained to me the routes and techniques I could have used to arrive at my destination using used lane assist the whole way. Disclaimer, I have only dabbled a little with rust. It is the religion behind and around it that I struggle with, not the borrow checker. | | |
| ▲ | danudey 3 days ago | parent [-] | | I have also mostly only dabbled with Rust, and I've come to the conclusion that it is a fantastic language for a lot of things but it is very unforgiving. The optimal way to write Python is to have your code properly structured, but you can just puke a bunch of syntax into a .py file and it'll still run. You can experiment with a file that consists entirely of "print('Hello World')" and go from there. Import a json file with `json.load(open(filename))` and boom. Rust, meanwhile, will not let you do this. It requires you to write a lot of best-practice stuff from the start. Loading a JSON file in a function? That function owns that new data structure, you can't just keep it around. You want to keep it around? Okay, you need to do all this work. What's that? Now you need to specify a lifetime for the variable? What does that mean? How do I do that? What do I decide? This makes Rust feel much less approachable and I think gives people a worse impression of it at the start when they start being told that they're doing it wrong - even though, from an objective memory-safety perspective, they are, it's still frustrating when you feel as though you have to learn everything to do anything. Especially in the context of the small programs you write when you're learning a language. I don't care about the 'lifetime' of this data structure if the program I'm writing is only going to run for 350ms. As I've toiled a bit more with Rust on small projects (mine or others') I feel the negative impacts of the language's restrictions far more than I feel the positive impacts, but it is nice to know that my small "download a URL from the internet" tool isn't going to suffer from a memory safety bug and rootkit my laptop because of a maliciously crafted URL. I'm sure it has lots of other bugs waiting to be found, but at least it's not those ones. | | |
| ▲ | fpoling 3 days ago | parent | next [-] | | Rust is very forgiving if the goal is not the absolute best performance. One can rewrite Python code into Rust mostly automatically and the end result is not bad. Recent LLMs can do it without complex prompting. The only problem is the code would be littered with Rc<RefCell<Foo>>. If Rust had a compact notation for that, a lot of the pain of fighting the borrow checker just to avoid the above would be eliminated. | | |
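There is no built-in shorthand for `Rc<RefCell<Foo>>`, but a type alias gets most of the way to the compact notation fpoling wishes for (the alias and helper names below are invented, not standard library items):

```rust
use std::cell::RefCell;
use std::rc::Rc;

// A hypothetical shorthand for the shared-mutable handle that mechanical
// Python-to-Rust translation tends to produce everywhere.
type Shared<T> = Rc<RefCell<T>>;

fn shared<T>(value: T) -> Shared<T> {
    Rc::new(RefCell::new(value))
}
```

Usage mirrors Python's reference semantics: `Rc::clone` makes another owner of the same object, and `borrow_mut` takes a dynamically-checked mutable borrow.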
| ▲ | wffurr 3 days ago | parent [-] | | >> If Rust would have a compact notation for "Rc<RefCell<Foo>>" That sounds like Rhai or one of the other Rust-alike scripting languages. |
| |
| ▲ | yen223 3 days ago | parent | prev [-] | | Someone (Rich Hickey?) described this as a piano that doesn't make a sound until you play the piece perfectly, and that analogy has stuck with me since. | | |
| ▲ | timschmidt 2 days ago | parent [-] | | Much preferred over pianos which are making unwanted sounds, unexpected sounds, loud crashing sounds, sounds initiated by sheet music which exploits a flaw in the piano's construction, and can therefore not be depended upon to sound appropriately during important events like concerts. That said, I'm all the time noodling new small programs in Rust. Cargo and crates.io makes this far simpler than with C/C++ where I have to write some code in another language entirely like [C]Make to get the thing to build. And I find that the borrow checker and rustc's helpful errors create a sort of ladder where all I have to do to get a working program is fix the errors the compiler identifies. And it often tells he how. Once the errors are fixed one by one, which is easy enough, and the software builds, my experience is that I get the expected program behavior about 95% of the time. I cannot say the same for other languages. |
|
|
| |
| ▲ | 6P58r3MXJSLi 2 days ago | parent | prev | next [-] | | > It relieves you from the burden of constant vigilance Does it..? Rust is more like your parents when you were a kid: don't do that, don't do that either! See? You wanted to go out to play with your friends and now you have a bruised knee. What did I tell you? Now go to your room and stay there! | |
| ▲ | mattwilsonn888 3 days ago | parent | prev | next [-] | | No. It is not an invisible safeguard - it yaps and significantly increases compile time and (a matter of great debate) development effort. It is a helmet, just accept it. Helmets are useful. | | |
| ▲ | steveklabnik 3 days ago | parent | next [-] | | The borrow checker is never a significant portion of compile times. | | |
| ▲ | mattwilsonn888 3 days ago | parent [-] | | This is incredibly misleading (technically true maybe) and you know it. Rust has slower compile times for the sake of safety, it's a tradeoff you shouldn't be ashamed of. I didn't narrowly claim the borrow checker (as opposed to the type system or other static analysis) was the sole focus of the tradeoff. | | |
| ▲ | shakow 3 days ago | parent [-] | | > Rust has slower compile times That's true. > for the sake of safety, That's false though. All deep dives in the topic find that the core issue is the sheer amount of unoptimized IR that is thrown at LLVM, especially due to the pervasive monomorphization of everything. | | |
| ▲ | mattwilsonn888 3 days ago | parent [-] | | Well it is arguably Rust's worst issue and it has remained so for most of its life. Are you really going to try and convince people that this is completely incidental and not a result of pursuing its robust static contracts? How pedantic should we be about it? | | |
| ▲ | burntsushi 2 days ago | parent | next [-] | | On the one hand, you talk about being upfront and honest about trade-offs. On the other, you yourself are being less than credible by phrasing wild speculation as if it were fact. So... do as I say, not as I do? | |
| ▲ | steveklabnik 3 days ago | parent | prev | next [-] | | Yes. This is borne out by the numbers. It has nothing to do with being pedantic, it’s basic facts. | |
| ▲ | purplesyringa 3 days ago | parent | prev | next [-] | | It's not about static contracts at all, it's about keeping performance of high-level APIs high. It's all just about templates and generics, as far as I'm aware -- the same problem that plagues C++, except that it's worse in Rust because it's more ergonomic to expose templates in public library APIs in Rust than in C++. Well, and also the trait solver might be quite slow, but again, it has nothing to do with memory safety. | |
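A small illustration of the monomorphization point (illustrative code, not from any particular codebase): each concrete type a generic function is used with produces its own copy of code for LLVM to optimize, while `dyn Trait` shares one copy at the cost of a vtable call:

```rust
// Each concrete T gets its own compiled copy of this function
// (monomorphization); the per-instantiation IR is what gets handed to LLVM.
fn largest<T: PartialOrd + Copy>(items: &[T]) -> T {
    let mut best = items[0];
    for &item in &items[1..] {
        if item > best {
            best = item;
        }
    }
    best
}

// One shared copy via dynamic dispatch: a vtable call at runtime,
// but no extra codegen per type.
fn describe(x: &dyn std::fmt::Display) -> String {
    format!("{}", x)
}

fn main() {
    assert_eq!(largest(&[1, 5, 3]), 5);    // instantiates largest::<i32>
    assert_eq!(largest(&[1.5, 0.5]), 1.5); // instantiates largest::<f64>
    assert_eq!(describe(&42), "42");       // no new instantiation
}
```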
| ▲ | shakow 3 days ago | parent | prev [-] | | > Are you really going to try and convince people that this is completely incidental and not a result of pursuing its robust static contracts? I am, because that's what all the people who explored the question converged on. Now if you have other evidence to bring to the debate, feel free to – otherwise, please stop spreading FUD and/or incompetence. |
|
|
|
| |
| ▲ | rowanG077 3 days ago | parent | prev [-] | | It is a helmet. But at least it's a helmet in situations where you get into brain cracking accidents multiple times a day. In the end the helmet allows you to get back up and continue your journey compared to when you had no helmet. | | |
| ▲ | mattwilsonn888 3 days ago | parent [-] | | We're talking about Zig not C. Same argument will apply to Odin. These modern approaches are not languages that result in constant memory-safety issues like you imply. | | |
| ▲ | pjmlp 2 days ago | parent [-] | | Modern as in 1978 and Modula-2 was just made available. Or better yet, modern as 1961 Burroughs released ESPOL/NEWP and C was a decade away to be invented. |
|
|
| |
| ▲ | nsagent 3 days ago | parent | prev [-] | | I don't think this is any better of an argument. Maybe yours is a more apt analogy, but as a very competent driver I can't tell you how often lane assist has driven me crazy. If I could simply rely on it in all situations, then it would be fine. It's the death of a thousand cuts each and every time it behaves less than ideally that gets to me, and I've had to turn it off in every single car I've driven that has it. |
| |
| ▲ | aw1621107 3 days ago | parent | prev | next [-] | | > it's a universal tradeoff, that is: Safety is less ergonomic. I'm not sure that that tradeoff is quite so universal. GC'd languages (or even GC'd implementations like Fil-C) are equally or even more memory-safe than Rust but aren't necessarily any less ergonomic. If anything, it's not an uncommon position that GC'd languages are more ergonomic since they don't forbid some useful patterns that are difficult or impossible to express in safe Rust. | | |
| ▲ | nextaccountic 2 days ago | parent | next [-] | | GCed languages aren't (usually) more memory safe than safe Rust because they (usually) lack an equivalent to the Send and Sync traits and thus will not prevent unsynchronized access to the same data from multiple threads, including data races. Some languages might define data races to not be UB (at least Java and OCaml do this, but Go famously doesn't), but even in those languages data races may produce garbage data due to tearing (writing to a large struct needs to be done in many steps, and without proper synchronization two concurrent updates may interleave, leaving data structures in an inconsistent state). Guaranteed thread safety is huge. I hope more high-level, GC languages use Rust's approach of defining an interface for types that can be safely sent and/or shared across threads. | |
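A minimal sketch of what the Send/Sync machinery buys (the `parallel_count` helper is made up for illustration):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Arc<Mutex<T>> is Send + Sync, so the compiler lets it cross threads.
fn parallel_count(n: usize) -> i32 {
    let counter = Arc::new(Mutex::new(0));
    let handles: Vec<_> = (0..n)
        .map(|_| {
            let c = Arc::clone(&counter);
            thread::spawn(move || {
                *c.lock().unwrap() += 1;
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let total = *counter.lock().unwrap();
    total
}

fn main() {
    assert_eq!(parallel_count(4), 4);
    // Swapping Arc<Mutex<_>> for Rc<RefCell<_>> here fails to compile:
    // Rc is !Send, which is exactly the unsynchronized cross-thread
    // sharing the comment says most GCed languages cannot rule out.
}
```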
| ▲ | aw1621107 2 days ago | parent [-] | | > GCed languages aren't (usually) more memory safe than safe Rust because they (usually) lack the equivalent to the Send and Sync traits and thus will not prevent unsynchronized access to the same from multiple threads, including data races. To be honest that particular aspect of Rust's memory safety story slipped my mind when I wrote that comment. I was thinking of Java's upcoming (?) disabling-by-default of sun.misc.Unsafe which allows you to ensure that you have no unsafe code at all in your program and your dependency tree outside of the JVM itself. To be fair, that's admittedly not quite the same level of memory safety improvement over Rust as Rust/Java/C#/etc. over C/C++/etc., but I felt it's a nice guarantee to have available. > Guaranteed thread safety is huge. I totally agree! It's not the only way to get memory safety, but I definitely lean towards that approach over defining data races to be safe. |
| |
| ▲ | SkiFire13 3 days ago | parent | prev | next [-] | | The tradeoff is between performance, safety and ergonomics. With GC languages you lose the first one. | | |
| ▲ | quotemstr 3 days ago | parent [-] | | > The tradeoff is between performance, safety and ergonomics. With GC languages you lose the first one. That's a myth that just won't die. How is it that people simultaneously believe 1) GC makes a language slow, and 2) Go is fast? Go's also isn't the only safe GC. There are plenty of good options out there. You are unlikely to encounter a performance issue using one of these languages that you could resolve only with manual memory management. | | |
| ▲ | shakow 3 days ago | parent | next [-] | | Go is not fast though. It's plenty fast compared to commonly used languages such as JS, PHP or Python, but can easily be left in the dust by Java and C#, which arguably play in the same court. And AOT-compiled, no-GC languages like C++, Rust or Zig just run circles around it. | |
| ▲ | pjmlp 2 days ago | parent [-] | | Not quite true if using GCC Go as compiler, however they seem to have dropped development after Go got generics. You are comparing quality of implementation, not languages. | | |
| ▲ | shakow 2 days ago | parent [-] | | > You are comparing quality of implementation, not languages. But comparing languages in a vacuum has 0 value. Maybe some alien entity will use physics transcending time and space to make TCL the fastest language ever, but right now I won't be writing heavy data-processing code in it. | | |
| ▲ | pjmlp 2 days ago | parent [-] | | At the same time, comparing without acknowledging that it is an implementation issue is also not fully honest. For example, comparing languages with LLVM-based implementations, usually if the machine code isn't the same, reveals that they aren't pushing the same LLVM IR down the pipe, and has little value for what the grammar actually looks like. | | |
| ▲ | shakow 2 days ago | parent [-] | | > comparing without acknowledging that it is an implemenation issue Because that's implicit at this point – I'm not going to prefix with “because Earth geometry is approximately Euclidian at our scale” every time I'm telling a tourist to go straight ahead for 300m to their bus station. Just like when people say “C++ is fast”, of course they refer to clang/g++/msvc, not some educational university compiler. | | |
| ▲ | pjmlp 2 days ago | parent [-] | | I seriously doubt it, given how many discussions go on HN or similar sites, revealing a complete lack of knowledge in compiler design. Of course the authors of many of such comments aren't to blame, they only know what they know, hence why https://xkcd.com/386/ |
|
|
|
|
| |
| ▲ | cheshire_cat 3 days ago | parent | prev | next [-] | | Go is fast compared to Python or Ruby. Go is not fast compared to C. I think people that talk about GC'd languages being slow are usually not building Rails or Django apps in their day to day. | | |
| ▲ | on_the_beach 3 days ago | parent [-] | | Not a Go programmer, I'm guessing. Go can be made to run much faster than C, especially when the legacy C code is complex and thus single-threaded: Go's fabulous multicore support means you can exploit parallelism and finish jobs faster, with far less effort than it would take in C. If you measure performance per developer-day invested in writing the Go, Go usually wins by a wide margin. | | |
| ▲ | quotemstr 3 days ago | parent [-] | | > Go can be made to run much faster than C. Not literally the case. > If you measure performance per developer day invested in writing the Go, Go usually wins by a wide margin. I can accept that performance/hour-spent is better in Go than C, but that's different from Go's performance ceiling being higher than C's. People often confuse ceilings with effort curves. |
|
| |
| ▲ | somanyphotons 2 days ago | parent | prev | next [-] | | Go is middle-pack fast, not fast-fast. There are always going to be problem sets where the GC causes significant slowdown. | |
| ▲ | rixed 3 days ago | parent | prev [-] | | > How is it that people simultaneously believe 1) GC makes a language slow, and 2) Go is fast? Easy one: either they're not the same people, or they're people holding self-contradicting thoughts. GCs are slow not only because of scanning the memory but also because of the boxing. In my experience, 2 to 3 times slower. Still a better tradeoff in the vast majority of cases over manual memory management. A GC is well worth the peace of mind. | | |
| ▲ | quotemstr 3 days ago | parent [-] | | > also because of the boxing Not every GC boxes primitives. Most don't. | | |
| ▲ | rixed 3 days ago | parent [-] | | Sure. What about non primitives? | | |
| ▲ | pjmlp 2 days ago | parent [-] | | CLU, Cedar, Modula-2+, Modula-3, Oberon, Oberon-2, Oberon-07, Active Oberon, Component Pascal, Eiffel, Sather, BETA, D, Nim, Swift, C#, F# are all examples of GC based languages with value types that can be used without any kind of boxing. | | |
| ▲ | rixed 2 days ago | parent [-] | | Impressive erudition and an interesting list of nice languages, some of which I'd never heard of, thank you. Yes indeed, not all GCed languages suffer from mandatory boxing. I've been both picky and wrong, which is not a nice place to find oneself in :) |
|
|
|
|
|
| |
| ▲ | mattwilsonn888 3 days ago | parent | prev [-] | | *Under the assumption that you are maximizing both. I often hear complaints that Rust's semantics actually haven't maximized ergonomics, even factoring in the added difficulty it faces in pursuit of safety. It's totally possible for languages as ergonomic as Rust to be safer, simply because Rust isn't perfect and even has some notable, partially subjective, design flaws. | |
| ▲ | aw1621107 3 days ago | parent [-] | | > Under the assumption that you are maximizing both. I'm not sure that changes anything about my comment? GC'd languages can give you safety *and* ergonomics, no need to trade off one for the other. Obviously doing so requires tradeoffs of their own, but such additional criteria were not mentioned in the comment I originally replied to. > I often hear complaints that Rust's semantics actually haven't maximized ergonomics, even factoring in the added difficulty it faces in pursuit of safety. Well yes, that's factually true. I don't think anyone can disagree that there are places where Rust can further improve ergonomics (e.g., partial borrows). And that's not even taking into account places where Rust intentionally made things less ergonomic (around raw pointers IIRC, though I think there's some discussion about changing that). > It's totally possible languages as ergonomic as Rust can be more safe It's definitely possible (see above about GC'd languages). There are just other tradeoffs that need to be made that aren't on the ergonomics <-> safety axis. | |
| ▲ | mattwilsonn888 3 days ago | parent [-] | | Your earlier point that languages exist that are safer than Rust but not less ergonomic is irrelevant - that's the point I made. One can fail, or artificially make a language less ergonomic and that doesn't mean that fixing that somehow has an effect on the safety tradeoff. So obviously it is when safety and ergonomics are each already maximized that pushing one or the other results in a tradeoff. It's like saying removing weight from a car isn't a tradeoff because the weight was bricks in the trunk. Anyways I was holding performance constant in all of this because the underlying assumption of Rust and Zig and Odin and C is that performance will make no sacrifices. | | |
| ▲ | aw1621107 3 days ago | parent [-] | | > that's the point I made. That's not the way I read your original comment. When you said "it's a universal tradeoff, that is: Safety is less ergonomic", to me the implication is that gaining safety must result in losing ergonomics and vice versa. The existence of languages that are as safe/safer than Rust and more ergonomic than Rust would seem to be a natural counterexample since they have gained safety over Zig/C/C++ and haven't (necessarily, depending on the exact language) sacrificed ergonomics to do so. > One can fail, or artificially make a language less ergonomic and that doesn't mean that fixing that somehow has an effect on the safety tradeoff. To be honest that case didn't even cross my mind when I wrote my original comment. I was assuming we were working at the appropriate Pareto frontier. > So obviously it is when safety and ergonomics are each already maximized that pushing one or the other results in a tradeoff. Assuming no other relevant axes are available, sure. > Anyways I was holding performance constant in all of this because the underlying assumption of Rust and Zig and Odin and C is that performance will make no sacrifices. Sure. Might have been nice to include that assumption in your original comment, but even then I'm not sure it's too wise to ignore the performance axis completely due to the existence of safe implementations of otherwise "unsafe" languages (e.g., Zig's ReleaseSafe, GCs for C like Fil-C, etc.) that trade off performance for safety instead of ergonomics. |
|
|
|
| |
| ▲ | pjmlp 3 days ago | parent | prev | next [-] | | As I mention elsewhere, the C crowd didn't call languages like Pascal and Modula-2 "programming with a straitjacket" for no reason. Turns out not wearing that helmet, and continuously falling down at the skate park for 40 years, has its price. | |
| ▲ | junon 2 days ago | parent | prev | next [-] | | Writing rust fulltime for my personal projects, I have to disagree that Rust isn't ergonomic. In fact, I find it more ergonomic than any other language I ever work with. I'm consistently more productive with it than even scripting languages. Getting tired of this quip being asserted as fact. Ergonomics are subjective; memory safety is not. | | |
| ▲ | player1234 2 days ago | parent [-] | | How did you measure this productivity boost? Please share your results and methodology. | | |
| ▲ | junon 2 days ago | parent [-] | | How did you measure the assertion that Rust is not ergonomic? Please share your results and methodology. |
|
| |
| ▲ | freeopinion 3 days ago | parent | prev | next [-] | | Arguing that one language is more ergonomic but can produce the same safety if you use it unergonomically is... not very useful in a context where safety is highly valued. | |
| ▲ | bionhoward 3 days ago | parent | prev | next [-] | | Rust feels easy for me, could it be we’re just used to what we use more? Anyway, it’s all pretty easy, what’s the use arguing which of multiple easy things is easiest? | |
| ▲ | spoiler 2 days ago | parent | prev | next [-] | | I find Rust more ergonomic than something like C or C++, or even Go. I feel unburdened while using Rust, which is not something I can say about a lot of other dev environments. As for Zig... I tried to get into it, and I can't remember the specifics, but they felt like "poor" taste in language design (I have a similar opinion of Go). I say taste because I think some things weren't necessarily bad, but I just couldn't convince myself to like them. I realise this is a very minority opinion, and I know great engineers who love Zig. Zig's just not my thing I guess. Same way Rust isn't someone else's thing. | |
| ▲ | charcircuit 3 days ago | parent | prev | next [-] | | >Safety is less ergonomic. It's not safety that makes it less ergonomic, it's correctness. | | |
| ▲ | mattwilsonn888 3 days ago | parent [-] | | Implying that correctness necessitates a lack of ergonomics is deeply flawed. The distinction between correctness and safety is that safety is willing to suffer false positives, in pursuit of correctness. Correctness is just correctness. | | |
| ▲ | charcircuit 3 days ago | parent [-] | | It's not flawed. Ergonomics are correlated with complexity. If you can remove edge cases by giving up on correctness you can remove complexity. |
|
| |
| ▲ | timeon 3 days ago | parent | prev | next [-] | | How are chairs related to programming languages? Analogies are just made-up arguments. | | |
| ▲ | mattwilsonn888 3 days ago | parent [-] | | Because a primary function of the chair is the same to programming languages: ergonomics. That is obviously true, otherwise we'd code in assembly and type in UTF-8 byte codes. |
| |
| ▲ | SoraNoTenshi 3 days ago | parent | prev | next [-] | | It also drives me insane when I dump the problems I have with Rust about this exact issue, that I usually have to restructure my code to satisfy the compiler's needs, and they come at me with the "Skill Issue" club... I honestly don't even know what to respond to that, but it's kind of weird to me to honestly think that you'd need essentially a "PhD" in order to use a tool... | |
| ▲ | mattwilsonn888 3 days ago | parent | next [-] | | It's an amazing piece of marketing to corner anyone who dislikes a certain hassle as being mentally deficient - that's what "skill issue" means in this context. | | |
| ▲ | shakow 3 days ago | parent [-] | | > that's what "skill issue" means in this context. “Skill issue” definitely does not means “mentally deficient”. It comes from the videogames world, where it is used to disparage the lack of training/natural ability of other players; frequently accompanied by “get good”, i.e. continue training & grinding to up your skill. | | |
| ▲ | mattwilsonn888 2 days ago | parent [-] | | Do I have to put the pieces together for you? What is the relevant skill in programming? Problem solving. It's not aim or timing or hand-eye coordination lmao. | | |
| ▲ | vrmiguel 2 days ago | parent [-] | | Not sure I get your point. One can definitely "get good" at problem solving. Isn't that the whole purpose of Leetcode and whatnot? I struggled with Rust at first but now it feels quite natural, and is the language I use at work and for my open-source work. I was not "mentally deficient" when I struggled with Rust (at least that I know of :v), while you could say I had a skill issue with the language |
|
|
| |
| ▲ | Ar-Curunir 3 days ago | parent | prev [-] | | Not saying that Rust is necessarily easy to pick up, but hundreds of thousands of people use Rust without a PhD in any subject. | | |
| ▲ | mattwilsonn888 3 days ago | parent | next [-] | | Try and appreciate the humor in what you're replying to without fully discounting the point of it. | |
| ▲ | SoraNoTenshi 2 days ago | parent | prev | next [-] | | It is of course an exaggeration, but that's what is somewhat annoying to me. Dismissing this point by just telling me to "get better" after literally years of using Rust is a bit of a weak argument, or am I in the wrong here? And it's not even that I dislike the language, but it is evangelism to just dismiss the point of my argument with "skill issue". A tool isn't supposed to be difficult; it should help you in whatever you're trying to achieve, not make it more difficult. |
| ▲ | 3 days ago | parent | prev [-] | | [deleted] |
|
| |
| ▲ | dayvster 3 days ago | parent | prev [-] | | > The Rust community should be upfront about this tradeoff - it's a universal tradeoff, that is: Safety is less ergonomic. It's true when you ride a skateboard with a helmet on, it's true when you program, it's true for sex. Well put! And this should not be a contentious issue; it simply is annoying to deal with Rust's very strict compiler. It's not a matter of opinion, it simply is more annoying than if you were to use any other language that does not put that much burden on you the developer. Not all memory safety bugs are critical issues either. We like to pretend like they are, but specifically in `coreutils` there were 2 memory safety bugs found recently. However, is it really a big concern? If someone has gotten access to your system where they can run `coreutils` commands, you probably have bigger problems than them running a couple of commands that leak. | |
| ▲ | aw1621107 3 days ago | parent [-] | | > if someone has gotten access to your system where they can run `coreutil` commands you probably have bigger problems than them running a couple of commands that leak. Speaking more abstractly since I haven't looked at the CVEs in question, but an attacker directly accessing coreutils on your system isn't the only possible attack vector. Another potentially interesting one would be them expecting you to run coreutils on something under they control. For example, a hypothetical exploit in grep might be exploitable by getting you to grep through a repo with a malicious file. A more concrete example would be some of the various zero-click iMessage exploits, where the vulnerable software isn't exploited via the attackers directly accessing the victim's device but is exploited by sending a malicious file. |
|
|
|
| ▲ | tialaramex 3 days ago | parent | prev | next [-] |
| > Seasoned Rust coders don’t spend time fighting the borrow checker I like the fact that "fighting the borrow checker" is an idea from the period when the borrowck only understood purely lexical lifetimes. So you have to fight to explain why the thing you wrote, which is obviously correct, is in fact correct. That's already ancient history by the time I learned Rust in 2021. But, this idea that Rust will mean "fighting the borrow checker" took off anyway even though the actual thing it's about was solved. Now for many people it really is a significant adjustment to learn Rust if your background is exclusively say, Python, or C, or Javascript. For me it came very naturally and most people will not have that experience. But even if you're a C programmer who has never had most of this [gestures expansively] before you likely are not often "fighting the borrow checker". That diagnostic saying you can't make a pointer via a spurious mutable reference? Not the borrow checker. The warning about failing to use the result of a function? Not the borrow checker. Now, "In Rust I had to read all the diagnostics to make my software compile" does sound less heroic than "battling with the borrow checker" but if that's really the situation maybe we need to come up with a braver way to express this. |
| |
| ▲ | Austizzle 3 days ago | parent | next [-] | | I think the phrase _emotionally_ resonates with people who write code that would work in other languages, but the compiler rejects. When I was learning rust (coming from python/java) it certainly felt like a battle because I "knew" the code was logically sound (at least in other languages) but it felt like I had to do all sorts of magic tricks to get it to compile. Since then I've adapted and understand better _why_ the compiler has those rules, but in the beginning it definitely felt like a fight and that the code _should_ work. | |
| ▲ | milch 2 days ago | parent | prev | next [-] | | I've heard lots of people complain during the "exploratory" phase of writing code - maybe you haven't fully figured out how to write the code yet, you're iteratively making changes and restructuring as you go. Most languages make this easy, but with Rust e.g. if you add a lifetime to a reference in a struct it usually becomes a major refactor which can be frustrating to deal with when you're not even sure yet if your approach will work. More experienced devs would probably just use Rc or similar in that case to avoid the lifetime, and then maybe go back and refactor later once the code is "solidified", but for newer devs - they add the &, see the compiler error telling them it's missing a lifetime annotation, spend 30min refactoring, and that's how these stereotypes get reinforced | |
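A rough sketch of the refactor being described, with a hypothetical `Config` type: adding a borrowed field introduces a lifetime parameter that spreads through every signature that touches the struct, while `Rc` keeps the change local during exploration:

```rust
use std::rc::Rc;

// Borrowed version: the lifetime parameter must be repeated on every
// struct, impl block, and function signature that touches `ConfigRef`.
struct ConfigRef<'a> {
    name: &'a str,
}

// Rc version: no lifetime parameter, so exploratory restructuring stays
// local; it can be revisited once the design has solidified.
struct ConfigRc {
    name: Rc<str>,
}

fn main() {
    let owned = String::from("dev");
    let by_ref = ConfigRef { name: &owned };
    let by_rc = ConfigRc { name: Rc::from("dev") };
    assert_eq!(by_ref.name, &*by_rc.name);
}
```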
| ▲ | zorked 2 days ago | parent | prev [-] | | Ah, thanks for the historical take. I learned Rust recently. I like it. I never fought the borrow checker. I was sometimes happily protected by the borrow checker. I never understood what people were talking about. | | |
| ▲ | tialaramex 2 days ago | parent [-] | | Right, in early Rust, years ago, it's not even legal to do this:

    fn main() {
        let mut x = 5;
        let y = &x;
        let z = &mut x;
    }

The original borrowck goes: oh no, y is a reference to x and then z is a mutable reference to x, those can't both exist at the same time, but the scope of y hasn't ended, so I give up, here's an error message. You needed to adjust your software so that it can see why what you wrote is fine. That's "fighting the borrow checker". But today's borrowck just goes: duh, the reference y goes away right before the variable z is created, and everything is cool. These are called "Non-lexical lifetimes" because the lifetime is no longer strictly tied to a lexical scope - the curly braces in the program - but can have any necessary extent to make things correct. Further improving the ability of the borrowck to see that what you're doing is fine is an ongoing piece of work for Rust and always will be†, but NLL was the lowest-hanging fruit; most of the software I write would need tweaks to account for a strict lexical lifetime and it'd be very annoying when I know I am correct. † Rice's theorem tells us we can either have a compiler where sometimes illegal borrows are allowed or a compiler where sometimes borrows that should be legal are forbidden (or both, which seems useless), but we cannot have one which is always right, so Rust chooses the safe option and that means we're always going to be working to make it just a bit better. |
|
|
|
| ▲ | Galanwe 3 days ago | parent | prev | next [-] |
| > Seasoned Rust coders don’t spend time fighting the borrow checker My experience is that what makes your statement true is that _seasoned_ Rust developers just sprinkle `Arc` all over the place, thus effectively switching to automatic garbage collection. Because 1) statically checked memory management is too restrictive for most kinds of non-trivial data structures, and 2) the hoops you have to jump through with lifetimes to please the static checker whenever you start doing anything non-trivial are just beyond human comprehension. |
| |
| ▲ | hu3 3 days ago | parent | next [-] | | I did some quick search, not sure if this supports or denies your point: - 151 instances of "Arc<" in Servo: https://github.com/search?q=repo%3Aservo%2Fservo+Arc%3C&type... - 5 instances of "Arc<" in AWS SDK for Rust https://github.com/search?q=repo%3Arusoto%2Frusoto%20Arc%3C&... - 0 instances for "Arc<" in LOC https://github.com/search?q=repo%3Acgag%2Floc%20Arc%3C&type=... | | | |
| ▲ | andrewl-hn 3 days ago | parent | prev | next [-] | | `Arc`s show up all over the place specifically in async code that targets Tokio runtime running in multithreaded mode. Mostly this is because `tokio::spawn` requires `Future`s to be `Send + 'static`, and this function is a building block of most libraries and frameworks built on top of Tokio. If you use Rust for web server backend code then yes, you see `Arc`s everywhere. Otherwise their use is pretty rare, even in large projects. Rust is somewhat unique in that regard, because most Rust code that is written is not really a web backend code. | | |
| ▲ | khuey 3 days ago | parent [-] | | > `Arc`s show up all over the place specifically in async code that targets Tokio runtime running in multithreaded mode. Mostly this is because `tokio::spawn` requires `Future`s to be `Send + 'static`, and this function is a building block of most libraries and frameworks built on top of Tokio. To some extent this is unavoidable. Non-'static lifetimes correspond (roughly) to a location on the program stack. Since a Future that suspends can't reasonably stay on the stack it can't have a lifetime other than 'static. Once it has to be 'static, it can't borrow anything (that's not itself 'static), so you either have to Copy your data or Rc/Arc it. This, btw, is why even tokio's spawn_local has a 'static bound on the Future. It would be nice if it were ergonomic for library authors to push the decision about whether to use Rc<RefCell<T>> or Arc<Mutex<T>> (which are non-threadsafe and threadsafe variants of the same underlying concept) to the library consumer. | | |
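The same `'static` requirement can be seen with `std::thread::spawn`, which serves as a reasonable stand-in for `tokio::spawn` here (the `shared_len` helper is hypothetical):

```rust
use std::sync::Arc;
use std::thread;

fn shared_len(config: &Arc<Vec<String>>) -> usize {
    // thread::spawn, like tokio::spawn, requires its closure to be
    // Send + 'static, so it cannot borrow from this stack frame;
    // cloning the Arc moves an owned, 'static handle in instead.
    let shared = Arc::clone(config);
    thread::spawn(move || shared.len()).join().unwrap()
}

fn main() {
    let config = Arc::new(vec!["a".to_string(), "b".to_string()]);
    assert_eq!(shared_len(&config), 2);
    assert_eq!(config.len(), 2); // original handle still usable afterwards
}
```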
| |
| ▲ | qw3rty01 3 days ago | parent | prev | next [-] | | This is exactly the opposite of what he’s saying, using Arc everywhere is hacking around the borrow checker, a seasoned rust developer will structure their code in a way that works with the borrow checker; Arc has a very specific use case and a seasoned rust developer will rarely use it | | |
| ▲ | Aurornis 3 days ago | parent | next [-] | | These extreme generalizations are not accurate, in my experience. There are some cases where someone new to Rust will try to use Arc as a solution to every problem, but I haven't seen much code like this outside of reviewing very junior Rust developers' code. In some application architectures Arc is a common feature and it's fine. Saying that seasoned Rust developers rarely use Arc isn't true, because some types of code require shared references with Arc. There is nothing wrong with Arc when used properly. I think this is less confusing to people who came from modern C++ and understand how modern C++ features like shared_ptr work and when to use them. For people coming from garbage collected languages it's more tempting to reach for the Arc types to try to write code as if it was garbage collected. | |
| ▲ | packetlost 3 days ago | parent | prev | next [-] | | Arc<T> is all over the place if you're writing async code unfortunately. IMO Tokio using a work-stealing threaded scheduler by default and peppering literally everything with Send + Sync constraints was a huge misstep. | | |
| ▲ | ekidd 3 days ago | parent | next [-] | | I mostly wind up using Arc a lot while using async streams. This tends to occur when emulating a Unix-pipeline-like architecture that also supports concurrency. Basically, "pipelines where we can process up to N items in parallel." But in this case, the data hiding behind the Arc is almost never mutable. It's typically some shared, read-only information that needs to live until all the concurrent workers are done using it. So this is very easy to reason about: Stick a single chunk of read-only data behind the reference count, and let it get reclaimed when the final worker disappears. | |
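A sketch of that read-only fan-out pattern using plain threads in place of async workers (`fan_out` is a made-up name, assuming N workers each reading shared immutable data):

```rust
use std::sync::Arc;
use std::thread;

fn fan_out(table: Vec<i32>) -> Vec<i32> {
    // Shared, read-only data: no Mutex needed because nobody mutates it.
    let table = Arc::new(table);
    let handles: Vec<_> = (0..table.len())
        .map(|i| {
            let t = Arc::clone(&table); // one refcounted handle per worker
            thread::spawn(move || t[i] * 2)
        })
        .collect();
    // The underlying Vec is reclaimed when the last handle is dropped,
    // i.e. once all workers are done with it.
    handles.into_iter().map(|h| h.join().unwrap()).collect()
}

fn main() {
    assert_eq!(fan_out(vec![10, 20, 30]), vec![20, 40, 60]);
}
```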
| ▲ | vlovich123 3 days ago | parent | prev [-] | | Arc + work-stealing scheduler is common. But work-stealing schedulers in general are common (e.g. libdispatch popularized them). I believe the only alternative is thread-per-core, but those aren’t very common/popular. For what it’s worth, Zig would look very similar, except their novel injectable I/O syntax isn’t compatible with work stealing. Even then, while I’d agree that Arc is used in lots of places in work-stealing runtimes, I disagree that it’s used everywhere or that you can really do anything else if you want to leverage all your cores with minimum effort, without having to build your application specialized to deal with that. | | |
| ▲ | packetlost 3 days ago | parent [-] | | Being possible with minimal effort doesn't mean it has to be the default. The issue I have is that huge portions of Tokio's (and other async libs') API have a Send + Sync constraint that destroys the benefit of LocalSet / spawn_local. You can't build an application with the specialized thread-per-core or single-threaded runtime thing if you want to, because of pervasive incidental complexity. I don't care that they have a good work-stealing event loop; I care that it's the default, and their APIs all expect the work-stealing implementation and unnecessarily constrain cases where you don't use that implementation. It's frustrating and I go out of my way to avoid Tokio because of it. Edit: the issues are in Axum, not the core Tokio API. Other libs have this problem too due to aforementioned defaults. | | |
| ▲ | Arnavion 3 days ago | parent | next [-] | | >You can't build and application with the specialized thread-per core or single-threaded runtime thing if you wanted to because of pervasive incidental complexity. [...] It's frustrating and I go out of my way to avoid Tokio because of it. At $dayjob we have built a large codebase (high-throughput message broker) using the thread-per-core model with tokio (ie one worker thread per CPU, pinned to that CPU, driving a single-threaded tokio Runtime) and have not had any problems. Much of our async code is !Send or !Sync (Rc, RefCell, etc) precisely because we want it to benefit from not needing to run under the default tokio multi-threaded runtime. We don't use many external libs for async though, which is what seems to be the source of your problems. Mostly just tokio and futures-* crates. | | |
| ▲ | packetlost 3 days ago | parent [-] | | I might be misremembering and the overbearing constraints might be in Axum (which is still a Tokio project). External libs are a huge problem in this area in general, yeah. |
| |
| ▲ | vlovich123 3 days ago | parent | prev [-] | | Single-threaded runtime doesn't require Send+Sync for spawned futures. AFAIK Tokio doesn't have a thread-per-core backend and as a sibling intimated you could build it yourself (or use something more suited for thread-per-core like Monoio or Glommio). |
|
|
| |
| ▲ | jasonjmcghee 3 days ago | parent | prev | next [-] | | This is awkward. I've written a fair amount of rust. I reach for Arc frequently. I see the memory layout implications now. Do you tend to use a lot of Arenas? | | |
| ▲ | dminik 3 days ago | parent [-] | | I've not explored every program domain, but in general I see two kinds of program memory access patterns. The first is a fairly generic input -> transform -> output. This is your generic request handler, for instance. You receive a payload, run some transform on that (and maybe a DB request) and then produce a response. In this model, Arc is very fitting for some shared (im)mutable state. Like DB connections, configuration and so on. The second pattern is something like: state + input -> transform -> new state. Eg. you're mutating your app state based on some input. This fits stuff like games, but also retained UIs, programming language interpreters and so on. Using Arcs here muddles the ownership. The gamedev ecosystem has found a way to manage this by employing ECS, and while it can be overkill, the base DOD principles can still be very helpful. Treat your data as what it is: data. Use indices/keys instead of pointers to represent relations. Keep it simple. Arenas can definitely be a part of that solution. |
| |
| ▲ | bilekas 3 days ago | parent | prev [-] | | This is something I have noticed. While I'm by no means seasoned enough to consider myself even mid-level, some of my colleagues are, and what they tend to do is plan ahead much better (or pedantically, as they put it). The worst thing you will end up doing is trying to change an architectural decision later on. |
| |
| ▲ | Aurornis 3 days ago | parent | prev | next [-] | | > thus effectively switching to automatic garbage collection Arc isn't really garbage collection. It's a reference-counted smart pointer, like C++'s shared_ptr. If you drop an Arc and it's the last reference to the underlying object, it gets dropped deterministically. Garbage collection generally refers to more complex systems that periodically identify and free unused objects in a less deterministic manner. | |
| ▲ | ninkendo 3 days ago | parent | next [-] | | Also importantly, an Arc<T> can be passed to anything expecting a &T, so you’re not necessarily bumping refcounts all over the place when using an Arc. If you only store it in one place, it’s basically equivalent to any other boxed pointer. | |
| ▲ | Arnavion 3 days ago | parent | prev | next [-] | | >Garbage collection generally refers to more complex systems that periodically identify and free unused objects in a less deterministic manner. No, this is a subset of garbage collection called tracing garbage collection. "Garbage collection" absolutely includes refcounting. | | |
| ▲ | simonask 3 days ago | parent [-] | | There’s just no good reason to conflate the two. Rust’s Arc and C++’s std::shared_ptr do not reclaim reference cycles, so you can call it “garbage collection” if you want, but the colloquial understanding is way more useful. | | |
| |
| ▲ | hansvm 3 days ago | parent | prev | next [-] | | That's fair. It's not really a good pattern though. You get all the runtime overhead of object-soup allocation patterns, syntactic noise making it harder to read than even a primitive GC language (including one using ARC by default and implementing deterministic dropping, a pattern most languages grow out of), and the ability to easily leak [0] memory because it's not a fully garbage-collected solution. As a rough approximation, if you're very heavy-handed with ARC then you probably shouldn't be using rust for that project. [0] The term "leak" can be a bit hard to pin down, but here I mean something like space which is allocated and which an ordinary developer would prefer to not have allocated. | | |
| ▲ | Aurornis 3 days ago | parent [-] | | I agree that using an Arc where it's unnecessary is not good form. However, I disagree with generalizations that you can judge the quality of code based on whether or not it uses a lot of Arc. You need to understand the architecture and what's being accomplished. | | |
| ▲ | hansvm 3 days ago | parent [-] | | > disagree with generalizations that you can judge the quality of code based on whether or not it uses a lot of Arc That wasn't really my point, but I disagree with your disagreement anyway ;) Yes, you don't want to over-generalize, but Arc has a lot of downsides, doesn't have a lot of upsides, and can usually be relatively easily avoided in favor of something with a better set of tradeoffs. Heavy use isn't bad in its own right, but it's a strong signal suggestive of code needing some love and attention. My point though was: If you are going to heavily use Arc, Rust isn't the most ergonomic language for the task, and where for other memory management techniques the value proposition of Rust is more apparent, it's a much narrower gap compared to those ergonomic choices if you use Arc a lot. Maybe you have to (or want to) use Rust anyway for some reason, but it's usually a bad choice conditioned on that coding style. |
|
| |
| ▲ | bluGill 3 days ago | parent | prev | next [-] | | Reference counting has always been a way to garbage collect. Those who like garbage collection have always looked down on it because it cannot handle circular references and is typically slower than the mark-and-sweep garbage collectors they prefer. If you need a reference-counted garbage collector for more than a tiny minority of your code, then Rust was probably the wrong choice of language - use something that has a better (mark-and-sweep) garbage collector. Rust is good for places where you can almost always find a single owner, and you can use reference counting for the rare exception. | |
| ▲ | Aurornis 3 days ago | parent [-] | | Reference counting can be used as an input to the garbage collector. However, the difference between Arc and a Garbage Collector is that the Arc does the cleanup at a deterministic point (when the last Arc is dropped) whereas a Garbage Collector is a separate thing that comes along and collects garbage later. > If you need a referecne counted garbage collector for more than a tiny minotiry of your code The purpose of Arc isn't to have a garbage collector. It's to provide shared ownership. There is no reason to avoid Rust if you have an architecture that requires shared ownership of something. These reductionist generalizations are not accurate. I think a lot of new Rust developers are taught that Arc shouldn't be abused, but they internalize it as "Arc is bad and must be avoided", which isn't true. | | |
| ▲ | bluGill 2 days ago | parent [-] | | > whereas a Garbage Collector is a separate thing that comes along and collects garbage later. That is the most common implementation, but it is still just an implementation detail. Garbage collectors can run deterministically, which is what reference counting does. > There is no reason to avoid Rust if you have an architecture that requires shared ownership of something. Rust can be used for anything. However, its goals still center on systems programming. Systems programming implies some compromises which make Rust not as good a choice for other types of programming. Nothing wrong with using it anyway (and often you have a mix, and the overhead of multiple languages makes it worth using one even when another would be better for a small part of the problem). > I think a lot of new Rust developers are taught that Arc shouldn't be abused, but they internalize it as "Arc is bad and must be avoided", which isn't true. Arc has a place. However, most places where you use it, a little design work could eliminate the need. If you don't understand what I'm talking about, then "Arc is bad and must be avoided" is better than putting Arc everywhere, even though that would work and is less effort in the short run (and for non-systems programming it might even be a good design). |
|
| |
| ▲ | nayuki 3 days ago | parent | prev | next [-] | | > Arc isn't really garbage collection. It's like a reference counted smart pointer Reference counting is a valid form of garbage collection. It is arguably the simplest form. https://en.wikipedia.org/wiki/Garbage_collection_(computer_s... The other forms of GC are tracing followed by either sweeping or copying. > If you drop an Arc and it's the last reference to the underlying object, it gets dropped deterministically. Unless you have cycles, in which case the objects are not dropped. And then scanning for cyclic objects almost certainly takes place at a non-deterministic time, or never at all (and the memory is just leaked). > Garbage collection generally refers to more complex systems that periodically identify and free unused objects in a less deterministic manner. No. That's like saying "a car is a car; a vehicle is anything other than a car". No, GC encompasses reference counting, and GC can be deterministic or non-deterministic (asynchronous). | |
| ▲ | jcelerier 3 days ago | parent | prev | next [-] | | > Arc isn't really garbage collection. It's like a reference counted smart pointer like C++ has shared_ptr. In c++ land this is very often called garbage collection too | |
| ▲ | jandrewrogers 3 days ago | parent | prev [-] | | This still raises the question of why Arc is purportedly used so heavily. I've written 100s of kLoC of modern systems C++ and never needed std::shared_ptr. | | |
| ▲ | pjmlp 3 days ago | parent [-] | | For the same reason Unreal uses one. Large scale teams always get pointer ownership wrong. Project Zero has enough examples. |
|
| |
| ▲ | steveklabnik 3 days ago | parent | prev | next [-] | | The only time I use Arc is wrapping contexts for web handlers. That doesn’t mean there aren’t other legitimate use cases, but “all the time” is not representative of the code I read or write, personally. | |
| ▲ | kibwen 3 days ago | parent | prev | next [-] | | > _seasoned_ Rust developers just sprinkle `Arc` all over the place No, this couldn't be further from the truth. | | |
| ▲ | 9rx 3 days ago | parent [-] | | If they aren't sprinkling `Arc` all over, what are they seasoning with instead? | | |
| |
| ▲ | swiftcoder 3 days ago | parent | prev | next [-] | | I don't think there are any Arcs in my codebase (apart from a couple of regrettable ones needed to interface with Javascript callbacks in WASM - this is more a WASM problem than a rust problem). | | |
| ▲ | ChadNauseam 3 days ago | parent [-] | | haha, I was about to leave the exact same comment. how are you finding wasm? I’ve been feeling like rust+react is my new favorite tech stack | | |
| ▲ | swiftcoder 2 days ago | parent [-] | | I love it, but I'm mainly using it for webgl/webgpu stuff, so relatively little interaction with the DOM - I feel like DOM interaction is still kind of painful through rust/wasm |
|
| |
| ▲ | amw-zero 3 days ago | parent | prev | next [-] | | How often are you writing non-trivial data structures? | |
| ▲ | kannanvijayan 3 days ago | parent | prev | next [-] | | Not sure how seasoned I am, but I reject any comparison to a cooking utensil! I do find myself running into lifetime and borrow-checker issues much less these days when writing larger programs in Rust. And while your comment is a bit cheeky, I think it gets at something real. One of the implicit design mentalities that develops once you write Rust for a while is a good understanding of where to apply the `UnsafeCell`-related types, which include `Arc` but also `Rc` and `RefCell` and `Cell`. These all relate to interior mutability, and there are many situations where plopping in the right one of these effectively resolves some design requirement. The other idiomatic thing that happens is that you implicitly begin structuring your abstract data layouts in terms of chunks of raw structured data and connections between them. This usually involves an indirection - i.e. you index into an array of things instead of holding a pointer to the thing. Lastly, where lifetimes do get involved, you tend to have a prior idea of what thing they annotate. The example in the article is a good case study of that. The author is parsing a `.notes` file and building some index of it. The text of the `.notes` file is the obvious lifetime anchor here. You would write your indexing logic with one lifetime 'src: `fn build_index<'src>(src: &'src str)`. Internally to the indexing code, references to 'src-annotated things can generally pass around freely as their lifetime converges after it. Externally to the indexing code you'd build a string of the notes text, and pass a reference to that to the `build_index` function. For simple CLI programs, you tend not to really need anything more than this. It gets more hairy if you're looking at constructing complex object graphs with complex intermediate state, partial construction of sub-states, etc.
Keeping track of state that's valid at some level, while temporarily broken at another level, is where it gets really annoying with multiple nested lifetimes and careful annotation required. But it was definitely a bit of a hair-pulling journey to get to my state of quasi-peace with Rust's borrow checker. | |
| ▲ | ViewTrick1002 2 days ago | parent | prev | next [-] | | > My experience is that what makes your statement true, is that _seasoned_ Rust developers just sprinkle `Arc` all over the place, thus effectively switching to automatic garbage collection. How else would you safely share data in multi-threaded code, which is the only reason to use atomic reference counts? |
| ▲ | levkk 3 days ago | parent | prev | next [-] | | Definitely not. Arc is for immutable (or sync, e.g. atomics, mutexes) data, while borrow checker protects against concurrent mutations. I think you meant Arc<Mutex<T>> everywhere, but that code smells immediately and seasoned Rust devs don't do that. | |
| ▲ | dev_l1x_be 3 days ago | parent | prev | next [-] | | I am not sure this is true. Maybe with shared async access it is. I rarely use Arc. | |
| ▲ | bryanlarsen 3 days ago | parent | prev [-] | | Or more likely, sprinkle .clone() liberally and Arc or an Arc wrapper (ArcSwap, tokio's watch channels, etc) strategically. |
|
|
| ▲ | dayvster 3 days ago | parent | prev | next [-] |
> Seasoned Rust coders don’t spend time fighting the borrow checker No true Scotsman would ever be confused by the borrow checker. I've seen plenty of Rust projects, open source and otherwise, that utilise Arc heavily or use clone and/or copy all over the place. |
| |
| ▲ | vrmiguel 2 days ago | parent | next [-] | | Why would using Arc mean that someone is fighting the borrow checker, or confused by it? Would you also say the same for a C++ project that uses shared_ptrs everywhere? The clone quip doesn't work super well when comparing to C++ since that language "clones" data implicitly all the time | |
| ▲ | fkyoureadthedoc 3 days ago | parent | prev | next [-] | | I'm starting to think No True HNer goes without misidentifying a No True Scotsman fallacy. They are clearly just saying as you become more proficient with X, Y is less of a problem. Not that if the borrow checker is blocking you that you aren't a real Rust programmer. Let's say you're trying to get into running. You express that you can't breathe well during the exercise and it's a miserable experience. One of your friends tells you that as an experienced runner they don't encounter that in the same way anymore, and running is thus more enjoyable. Do you start screeching No True Scotsman!! at them? I think not. | |
| ▲ | Ygg2 3 days ago | parent | prev [-] | | > > Seasoned Rust coders don’t spend time fighting the borrow checker > No true scotsman would ever be confused by the borrow checker. I'd take that No true scotsman over the "Real C programmers write code without CVE" for $5000. Also you are strawmanning the argument. GP said, "As a seasoned veteran of Rust you learn to think like the borrow checkers." vs "Real Rust programmers were born with knowledge of borrow checker". |
|
|
| ▲ | melodyogonna 3 days ago | parent | prev | next [-] |
I believe the benefit of Zig is that it allows you the familiarity of writing code like in C, but has other elements in the language and tooling to make things safer. For example, Zig has optionals, which can eliminate nil dereference. Another example is how you can pass some debug or custom allocators during testing that have all sorts of runtime checks to detect bad memory access and resource leaks. I have some issues with Zig's design, especially around the lack of explicit interface/trait, but I agree with the post that it is a more practical language, just because of how much simpler its adoption is. |
|
| ▲ | ajross 3 days ago | parent | prev | next [-] |
| > Seasoned Rust coders don’t spend time fighting the borrow checker - their code is already written in a way that just works. That hasn't been my experience at all. At best, the first version of code pops out quickly and cleanly because the author knows the appropriate idiom to choose. Refactoring rust code to handle changes in that allocation idiom is extremely expensive, even for the most seasoned developers. Case in point: > Once you’ve been using Rust for a while, you don’t have to “restructure” your code to please the borrow checker, because you’ve already thought about “oh, these two variables need to be mutated concurrently, so I’ll store them separately”. Which fails to handle "these two variables didn't need to be mutated concurrently, but now they do". |
| |
| ▲ | simonask 3 days ago | parent | next [-] | | This is an interesting comment, because you point directly at the exact reason Rust’s approach is so productive. In the C/C++/Zig code, you would add the second concurrent access, and then start fixing things up and restructuring things - if you, the programmer, knew about the first access, and knew that the concurrent access is a problem. In countless cases, that work would not be done, and I cannot blame any of the involved people, because managing that kind of detailed complexity over the lifespan of a project is not humanly possible. The result is another concurrency bug, meaning UB in production. Having the compiler tell you about such problems up front, exactly when they happen, is a complete game changer. | | |
| ▲ | ajross 3 days ago | parent [-] | | > In the C/C++/Zig code, you would add the second concurrent access, and then start fixing things up and restructuring things Well, sure, which in practice means throwing a lock around it. I mean, I get it. There's a category of bugs that happen in real code. Rust chooses some categories of bugs (certainly not all of them) to construct walls around and forces code into idioms that can be provably correct for at least a subset[1] of the bug space. For the case of memory safety, that's a really pretty convincing case. Other areas are less clear; in particular I'm not a fan of Rust's idea of threadsafety and don't think it fits what actually performance-parallel code needs. [1] You can 100% write racy code with Sync/Send! You can write racy code with boring filesystem access using 100% safe rust (or 1980's csh code, whatever), too. Race conditions are inherent to concurrency. Sync/Send just protect memory access, they do nothing to address semantic/state bugs. | | |
| ▲ | simonask 2 days ago | parent [-] | | This is true out of the box - Rust isn't magical, and it can't fix all your bugs for you. But it does make its own tools available to you, an API designer. Lifetimes are available to you without ever making any actual references, and the Send/Sync traits are available to you without constructing any standard synchronization mechanism. You can construct something like `PhantomData<&mut ()>` to express invariants like "while this type exists, these other operations are unavailable". You can implement Send and/or Sync to say things like "under these specific conditions, this thing is thread safe". These are really powerful features of the type system, and no other mainstream language can really express such invariants at compile time. | | |
| ▲ | ajross 2 days ago | parent [-] | | Again though, I don't see a lot of value there. You seem to be implicitly repeating the prior that Rust prevents race conditions, and it emphatically does not. It detects and prevents concurrent unlocked[1] access to a single memory location, which is not the same thing. Real races are much more complicated than that and happen in the semantic space of the application, not the memory details (again, think of email lockfiles as a classic race that has nothing to do with memory at all). [1] Though with overhead. It tends not to be possible in safe Rust to get it to generate code that looks like a pthread_mutex critical section. This again is one of my peeves: you do dangerous shared memory concurrency for performance! |
|
|
| |
| ▲ | mattwilsonn888 3 days ago | parent | prev [-] | | I would be interested to read the debates that stem from this point. |
|
|
| ▲ | dev_l1x_be 3 days ago | parent | prev | next [-] |
I can't remember the last time I had any problem with the borrow checker. The junior solution is .clone(), a better one is & (reference), and if you really need to you can start to use <'a>. There is a mild annoyance with which function consumes what, and the LLM era really helped with this. My beef is sometimes with the way traits are implemented, or how AWS implemented errors for their library, which is just pure madness. |
| |
| ▲ | vlovich123 3 days ago | parent | next [-] | | > The junior solution is .clone() I really hope it’s an Rc/Arc that you’re cloning. Just deep cloning the value to get ownership is dangerous when you’re doing it blindly. | | |
| ▲ | dev_l1x_be 2 days ago | parent [-] | | Sure thing; this is why you need to learn what happens with copy and clone. Interestingly, smaller problems are going to be OK even with clone(). The real value is to be able to spot potential performance optimizations by grep '(copy|clone)'. | |
| ▲ | vlovich123 2 days ago | parent [-] | | How are you going to grep for copies? You know there’s no .copy method right? |
|
| |
| ▲ | seivan 3 days ago | parent | prev [-] | | How did AWS mess up errors? | | |
| ▲ | dev_l1x_be 3 days ago | parent [-] | | Maybe I am holding it wrong. Here is one piece of the problem:

    while let Some(page) = object_stream.next().await {
match page {
// ListObjectsV2Output
Ok(p) => {
if let Some(contents) = p.contents {
all_objects.extend(contents);
}
}
// SdkError<ListObjectsV2Error, Response>
Err(err) => {
let raw_response = &err.raw_response();
let service_error = &err.as_service_error();
error!("ListObjectsV2Error: {:?} {:?}", &service_error, &raw_response);
return Err(S3Error::Error(format!("ListObjectsV2Error: {:?}", err)));
}
}
}
| | |
| ▲ | estebank 3 days ago | parent [-] | | Out of curiosity, why are you borrowing that many times? The following should work:

    while let Some(page) = object_stream.next().await {
match page {
// ListObjectsV2Output
Ok(p) => {
if let Some(contents) = p.contents {
all_objects.extend(contents);
}
}
// SdkError<ListObjectsV2Error, Response>
Err(err) => {
let raw_response = err.raw_response();
let service_error = err.as_service_error();
error!("ListObjectsV2Error: {:?} {:?}", service_error, raw_response);
return Err(S3Error::Error(format!("ListObjectsV2Error: {:?}", err)));
}
}
}
I would have written it this way:

    while let Some(page) = object_stream.next().await {
let p: ListObjectsV2Output = page.map_err(|err| {
// SdkError<ListObjectsV2Error, Response>
let raw_response = err.raw_response();
let service_error = err.as_service_error();
error!("ListObjectsV2Error: {service_error:?} {raw_response:?}");
S3Error::Error(format!("ListObjectsV2Error: {err:?}"))
})?;
if let Some(contents) = p.contents {
all_objects.extend(contents);
}
}
although if your crate defines `S3Error`, then I would prefer to write:

    while let Some(page) = object_stream.next().await {
if let Some(contents) = page?.contents {
all_objects.extend(contents);
}
}
by implementing `From`:

    impl From<SdkError<ListObjectsV2Error, Response>> for S3Error {
fn from(err: SdkError<ListObjectsV2Error, Response>) -> S3Error {
let raw_response = err.raw_response();
let service_error = err.as_service_error();
error!("ListObjectsV2Error: {service_error:?} {raw_response:?}");
S3Error::Error(format!("ListObjectsV2Error: {err:?}"))
}
}
| | |
| ▲ | dev_l1x_be 2 days ago | parent [-] | | Excellent! Thank you! My problem is that I should have something like (http_status, reason), where http_status is a String or u16 and reason is an enum with a SomeError(String) structure. So essentially having a flat, meaningful structure instead of what we currently have. I do not have any mental model of the error structure of the AWS libs, and don't even know where to start to create that mental model. As a result I just try to turn everything into a string and return it all together, hoping that the real issue is there somewhere in that structure. I think the AWS library error handling is way too complex for what it does, and one way we could improve that is if Rust had a great example of a binary (bin) project that has, let's say, 2 layers of functions, showing how to organize your errors effectively. Now do this for a lib project. Without this you end up with this hot mess. At least this is how I see it. If you have a suggestion for how I should return errors from a util.rs that has s3_list_objects() to my HTTP handler, then I would love to hear what you have to say. Thanks for your suggestions anyway! I am going to re-implement my error handling and see if it gives us more clarity with the impl. | |
| ▲ | estebank 2 days ago | parent [-] | | You might want to look at anyhow and thiserror, the former for applications and for libraries the latter. thiserror "just" makes it easier to do what I suggested of writing a manual `impl From` so that `?` can transform from the error you're getting to the error you want. When the API you consume is very granular, that's actually great because it means that you have a lot of control over the transformation (it's hard to add detail that isn't already there), but it can be annoying when you don't care about that granularity (like when you just want to emit an error during shutdown or log during recovery). https://momori.dev/posts/rust-error-handling-thiserror-anyho... burntsushi has a good writeup about their difference in usecase here: https://www.reddit.com/r/rust/comments/1cnhy7d/whats_the_wis... |
|
|
|
|
|
|
| ▲ | nicoburns 3 days ago | parent | prev | next [-] |
| 100% I came to Rust from a primarily JavaScript/TypeScript background, and most of the idioms and approaches I was used to using translated directly into Rust. |
| |
|
| ▲ | aiono 3 days ago | parent | prev | next [-] |
A lot of criticism of Rust (not denying that there are also a lot of useful criticisms of Rust out there) boils down to "it requires me to think/train in a different way than I used to, therefore it's hard" and goes on to claim that the other way is easier, which is not the case; it is just familiar to them, and hence they think it's easier and simpler. More people should watch the talk "Simple made easy" https://www.youtube.com/watch?v=SxdOUGdseq4 |
|
| ▲ | tleyden5iwx 3 days ago | parent | prev | next [-] |
I don't think like a C programmer; my problem is that I think like a Java/Python/Go programmer, and I'm spoiled by getting used to having a garbage collector always running behind me cleaning up my memory poops. Even though Rust can end up with some ugly/crazy code, I love it overall because I can feel pretty safe that I'm not going to create hard-to-find memory errors. Sure, I can (and do) write code that causes my (Rust) app to crash, but so far they've all been super trivial errors to debug and fix. I haven't tried Zig yet though. Does it give me all the same compile-time memory-safety guarantees? |
| |
| ▲ | seemaze 3 days ago | parent [-] | | At first, the 12 year old inside me giggled at the thought of 'memory poops', but then I realized that a garbage collector is much more analogous to a waste water treatment plant than a garbage truck and a landfill.. |
|
|
| ▲ | amelius 3 days ago | parent | prev | next [-] |
| > Seasoned Rust coders don’t spend time fighting the borrow checker Yes, they know when to give up. |
| |
|
| ▲ | bsder 3 days ago | parent | prev | next [-] |
> Seasoned Rust coders don’t spend time fighting the borrow checker - their code is already written in a way that just works. I really wish people would quit bleating on about the borrow checker. As someone who does systems programming, that's not the problem with Rust. Which Trait do I need to do X? Where is Trait Y and who has my destructor? How am I supposed to refactor this closure into a function? Sigh, I have to wrap yet another object as a newtype because of the Orphan Rule. Ah yes, an eight-deep chain of initialization calls because Rust won't do named/optional function arguments. Oh, great, the bug is inside a macro--well, there goes at least one full day. Ah, an entity component system that treats indices like pointers but without the help of the compiler, so I can index into the void and scribble over everything--but, hey, it's memory safe and won't segfault (erm, there is a reason why C programmers groan when they get a stack/heap smasher bug). |
|
| ▲ | seivan 3 days ago | parent | prev [-] |
| [dead] |