photon_garden 4 days ago

Their code is more complex in some ways (for example, it’s verbose).

But in languages with exceptions, if you want to know how a function can fail, you have two options:

- Hope the documentation is correct (it isn’t)

- Read the body of the function and every function it calls

Reasonable people can disagree on the right approach here, but I know which I prefer.
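With something like Rust's Result, the signature itself answers the question. A minimal sketch (the error type and function here are invented for illustration):

    use std::fmt;

    // Every way this function can fail is enumerated in its return type.
    #[derive(Debug)]
    enum ParseConfigError {
        MissingField(&'static str),
        InvalidPort(String),
    }

    impl fmt::Display for ParseConfigError {
        fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
            match self {
                Self::MissingField(name) => write!(f, "missing field: {name}"),
                Self::InvalidPort(v) => write!(f, "invalid port: {v}"),
            }
        }
    }

    // The caller learns the failure modes from the signature alone,
    // without trusting docs or reading the body.
    fn parse_port(raw: &str) -> Result<u16, ParseConfigError> {
        raw.trim()
            .parse()
            .map_err(|_| ParseConfigError::InvalidPort(raw.to_string()))
    }

    fn main() {
        match parse_port("8080") {
            Ok(p) => println!("port {p}"),
            Err(e) => eprintln!("{e}"),
        }
    }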

jmux 4 days ago | parent | next [-]

> Hope the documentation is correct (it isn’t)

real

Compared to every exception-based language I've used, Rust error handling is a dream. My one complaint is async, but tbh I don't think exceptions would fare much better, since things like the actor model just don't really support error propagation in any meaningful way.

vbezhenar 4 days ago | parent | prev | next [-]

Every function can fail with StackOverflowError and you can't do anything about it.

Almost every function can fail with OutOfMemoryError and you can't do anything about it.

I've accepted that everything can fail. Just write code and expect it to throw. Write programs and expect them to abort.

I don't understand this obsession with error values. I remember when C++ designers claimed that exceptions provide faster code execution on the happy path, so even for a systems language they should be preferred. Go error handling is bad. Rust error handling is bad. C error handling is bad, but at least that's understandable.

frumplestlatz 4 days ago | parent | next [-]

This is silly. We can avoid stack overflow by avoiding unbounded recursion.

In user space, memory overcommit means that we will almost (or literally) never see an out-of-memory error.

In kernel space and other constrained environments, we can simply check for allocation failure and handle it accordingly.

This is a terrible argument for treating all code as potentially failing with any possible error condition.

vbezhenar 3 days ago | parent [-]

> We can avoid stack overflow by avoiding unbounded recursion.

Only if you control the entire ecosystem, from the application to every library you're ever going to use. And it won't work for library code anyway.

> In user space, memory overcommit means that we will almost (or literally) never see an out-of-memory error.

Have you ever deployed an application to production, where it runs in a container with memory limits? Or under good old ulimit? Or in any language with a VM or GC that provides explicit knobs to limit heap size (knobs that deployers actually use)?

> This is a terrible argument for treating all code as potentially failing with any possible error condition.

This is reality. Some programs can ignore it, hand-waving the possibility away, expecting that it won't happen or that it's not your problem. That's one approach, I guess.

frumplestlatz 3 days ago | parent [-]

> Only if you control the entire ecosystem, from the application to every library you're ever going to use. And it won't work for library code anyway.

Two edge cases existing is a terrible argument for creating a world in which any possible edge case must also be accounted for.

>> This is a terrible argument for treating all code as potentially failing with any possible error condition.

> This is reality. Some programs can ignore it, hand-waving the possibility away, expecting that it won't happen or that it's not your problem. That's one approach, I guess.

No, it's not reality. It's the mess you create when you work in languages with untyped exception handling and with people who insist on writing code the way you suggest.

whatevaa 3 days ago | parent | prev | next [-]

Rust and Go have panics for these. Most of the time there is nothing the application can do by itself: either there is a bug in the application, or there is an actual shortage of memory and only the OS can do anything about it.

I'm not talking about embedded or kernels. Different stories.

malkia 4 days ago | parent | prev | next [-]

^^^ This. In a recent project I came to the realization that dealing with memory-mapped files is much harder without exceptions (not that exceptions make it easy, but they at least make it possible).

Why? Let's say you've opened a memory-mapped file, you've got a pointer, and you hand this pointer down to some library: "Here, work with this." The library thinks it's normal memory - fine! And then a physical block error happens (whether it's Windows, OSX, Linux, etc.), and now you have to handle it from a rather large distance, where "error code" handling is not enough: you have to use signal handling with SIGxxx, or Windows SEH handling, or whatever the OS provides.

And then you have languages like GoLang/Rust/others where this is a pain point. Yes, you can handle it, but how well?

If you look in ReactOS, the code is full of `__try/__except` - https://github.com/search?q=repo%3Areactos%2Freactos+_SEH2_T... - because user-provided memory HAS to be checked: you don't want an exception happening while the kernel reads bad user memory.

So it's all good and fine, until you have to face this problem... or decide not to use mmap'd files (is that even possible?).

Okay, I know it's just a silly little thing I'm pointing at here - but I don't know of any good solution offhand...

And even handling this in C/C++, with all the SEH capabilities, still sucks...
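For what it's worth, if you can give up mmap, positional reads sidestep the whole thing: a bad block comes back as an ordinary error value instead of a signal. A minimal Rust sketch (Unix-only, via the standard FileExt trait):

    use std::fs::File;
    use std::io;
    use std::os::unix::fs::FileExt; // read_exact_at: pread(2) under the hood

    // Unlike a fault on a mapped page, an I/O error on a failing block
    // comes back here as an io::Error the caller can actually handle.
    fn read_block(file: &File, offset: u64, len: usize) -> io::Result<Vec<u8>> {
        let mut buf = vec![0u8; len];
        file.read_exact_at(&mut buf, offset)?;
        Ok(buf)
    }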

vlovich123 4 days ago | parent [-]

If the drive fails and you get a signal it’s perfectly valid to just let the default signal handler crash your process. Signals by definition are delivered non-locally, asynchronously, and there’s generally nothing to try/catch or recover. So handling this in Rust is no different than any other language because these kinds of failures never result in locally handleable errors.

malkia 2 days ago | parent [-]

That's not true - you can handle this pretty well with exceptions (yes, it's nagging that you have to add them, but it's doable)... Not so much without.

tialaramex 4 days ago | parent | prev [-]

> Every function can fail with StackOverflowError and you can't do anything about it.

> Almost every function can fail with OutOfMemoryError and you can't do anything about it.

In fact we can - though rarely do - prove software has neither of these failure modes. We can bound stack usage via analysis; we usually don't, but it's possible.
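One common trick along those lines: replace the call stack with an explicit work list, so stack depth stays constant regardless of input. A minimal sketch (the tree type is made up for illustration):

    struct Node {
        value: u64,
        children: Vec<Node>,
    }

    // Iterative traversal: the "stack" lives on the heap, so arbitrarily
    // deep trees can't overflow the call stack.
    fn sum(root: &Node) -> u64 {
        let mut total = 0;
        let mut work = vec![root];
        while let Some(node) = work.pop() {
            total += node.value;
            work.extend(node.children.iter());
        }
        total
    }

    fn main() {
        let tree = Node { value: 1, children: vec![Node { value: 2, children: vec![] }] };
        assert_eq!(sum(&tree), 3);
    }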

And avoiding OOM is such a widespread concern that Rust-for-Linux deliberately makes all allocating calls explicitly fallible, and offers strategies like Vec::push_within_capacity: a method which, if it succeeds, pushes the object into the collection, but, if the collection is full, gives the object back rather than allocating (which might fail) - no, you take it.
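In stable Rust today, Vec::try_reserve gets you part of the way there (push_within_capacity is still nightly-only at the time of writing). A minimal sketch:

    use std::collections::TryReserveError;

    // Allocation failure becomes an ordinary error value instead of an
    // abort: try_reserve can fail, but the subsequent push cannot.
    fn append(log: &mut Vec<u64>, record: u64) -> Result<(), TryReserveError> {
        log.try_reserve(1)?;
        log.push(record);
        Ok(())
    }

    fn main() {
        let mut log = Vec::new();
        append(&mut log, 42).expect("allocation failed");
    }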

malkia 4 days ago | parent | prev | next [-]

Or have checked exceptions (Java). Granted, this comes with a big downer: if you need to extend functionality and the new (updated) code has to throw a new exception, your method signature changes :(

But it's the best method I know of so far.

baq 4 days ago | parent [-]

Checked exceptions are not very different from the Result type in that regard TBH.
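E.g. adding a variant to a Rust error enum is roughly the moral equivalent of adding a checked exception to a throws clause: every exhaustive match stops compiling until callers deal with the new case. A made-up sketch:

    enum FetchError {
        NotFound,
        Timeout,
        // Add a RateLimited variant here and `describe` below stops
        // compiling, much like adding to a Java `throws` clause.
    }

    fn describe(e: &FetchError) -> &'static str {
        match e {
            FetchError::NotFound => "no such record",
            FetchError::Timeout => "upstream timed out",
        }
    }

    fn main() {
        println!("{}", describe(&FetchError::Timeout));
    }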

malkia 4 days ago | parent [-]

As in, the caller would be forced to know what to handle in advance? Is this really the case? (I'm not sure.) E.g. you call something and it returns Result<T, E>, but does it really enforce it... What about errors (results) that came from deeper?

I'm not trying to defend exceptions, nor checked ones, just trying to point out that I don't think they are the same.

For all intents and purposes, I really liked Common Lisp's exception handling; in my opinion it's the perfect one ("restartable"). But it also comes with a lot of runtime cost (and possibly other costs: interoperability? safety, nowadays?). Still, it was a valiant effort to make the programmer better: e.g. you iterate while developing, and while the program is throwing exceptions at you, you keep writing/updating the code (while it's running), etc. Probably not something a modern-day SRE/devops person would want (heh, "who thought live updating of code on a running system was a great idea? :)" - I still do, but I can see someone from devops frowning: "This is not the version I pushed"...)

vlovich123 4 days ago | parent [-]

> but does it really enforce it

It warns you if you ignore a Result, because the type is annotated with must_use (and this can be a compile error in CI if you choose to enforce zero warnings). Note that this is not true with try/catch: no one forces you to actually do anything with the error.

> What about errors (results) that came from deeper?

Same as with exceptions: either you handle it, propagate it up, or ignore it.
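Concretely, a minimal sketch: the io::Error from the deeper call travels up through `?`, and dropping the Result on the floor trips the must_use lint:

    use std::fs;
    use std::io;

    fn read_len(path: &str) -> io::Result<usize> {
        // The io::Error from deeper (fs::read_to_string) propagates up
        // via `?` and shows up in this function's signature too.
        let contents = fs::read_to_string(path)?;
        Ok(contents.len())
    }

    fn main() {
        read_len("notes.txt"); // warning: unused `Result` that must be used
    }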

cwillu 3 days ago | parent | prev | next [-]

And with error values, you also need to hope the documentation for what the error means is correct (it isn't), and read the body of the function and every function it calls to see where the error value actually came from and what it actually means. It's the same problem, but you get to solve a bonus logic puzzle trying to figure out where the error came from.

nromiun 4 days ago | parent | prev [-]

Or catch at the top-level function and see every exception in your project? Tell me which language does not have a top-level main function.

zaphar 4 days ago | parent | next [-]

This is the "I don't care what fails, nor do I wish to handle it" option. Which for some use cases may be fine. It does mean that you don't know what kinds of failures are happening, nor what the proper response to them is, though. Like it or not, errors are part of your domain, and modeling them properly as best you can is part of the job. Catching at the top level still means some percentage of your users are having a really bad day because you didn't know that error could happen. Error modeling reduces that, at the expense of developer time.

dingi 4 days ago | parent | next [-]

Top-level error handling doesn't mean losing error details. When done well, it uses specialized exceptions and a catch–wrap–rethrow strategy to preserve stack traces and add context. Centralizing errors provides consistency, ensures all failures pass through a common pipeline for logging or user messaging, and makes policies easier to evolve without scattering handling logic across the codebase. Domain-level error modeling is still valuable where precision matters, but robust top-level handling complements it by catching the unexpected and reducing unhandled failures, striking a balance between developer effort and user experience.
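For example, a rough Rust analogue of catch-wrap-rethrow (all names invented for illustration): wrap the low-level error with context, keep the original reachable via source(), and walk the chain once at the top level:

    use std::{error::Error, fmt};

    #[derive(Debug)]
    struct ConfigError {
        path: String,
        source: std::io::Error,
    }

    impl fmt::Display for ConfigError {
        fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
            write!(f, "failed to load config from {}", self.path)
        }
    }

    impl Error for ConfigError {
        fn source(&self) -> Option<&(dyn Error + 'static)> {
            Some(&self.source) // the "rethrown" inner error stays attached
        }
    }

    fn load_config(path: &str) -> Result<String, ConfigError> {
        std::fs::read_to_string(path)
            .map_err(|source| ConfigError { path: path.to_string(), source })
    }

    fn main() {
        // The single top-level handler walks the whole error chain.
        if let Err(e) = load_config("/etc/app.toml") {
            eprintln!("error: {e}");
            let mut cause = e.source();
            while let Some(c) = cause {
                eprintln!("caused by: {c}");
                cause = c.source();
            }
        }
    }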

zaphar 2 days ago | parent [-]

If you are actually using specialized exceptions and a catch-wrap-rethrow strategy, then you are doing error modeling, and you aren't "just letting them bubble up to the top" - which is basically making my point for me.

nromiun 4 days ago | parent | prev [-]

"I don't care what fails" means not catching any exception/error. My comment was the exact opposite of the idea. Top level function will bubble up every exception, no matter how deep or from which module.

_flux 4 days ago | parent [-]

But the way you actually learn what errors can happen is when your users start complaining about them, not because you somehow knew about them beforehand.

Or maybe you have 100% path coverage in your tests...

nromiun 4 days ago | parent [-]

So you are talking about bugs that don't get caught in development? That happens in Rust as well. The borrow checker does not catch every bug or error. A random module you are using could panic, and you would not know with Rust (or any language, for that matter) until your users trigger those bugs.

_flux 4 days ago | parent [-]

It sure does happen. So should we simply give up? Or should we aspire to have tools to reduce those bugs?

Knowing what kind of errors can occur is one of those tools.

johannes1234321 4 days ago | parent | prev [-]

Even better: just let it crash and get a core dump with full context information, rather than a log with missing information.

But often some "expected" errors can be handled in a better way (retry, ask the user, use an alternate approach, ...).
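E.g. a minimal sketch of handling one such "expected" class in Rust - retry a transient failure a few times and otherwise return the error, so anything unexpected still fails loudly (the helper is invented for illustration):

    use std::{thread, time::Duration};

    // Generic retry helper: re-run the fallible operation up to `attempts`
    // times with a fixed delay, returning the last result either way.
    fn with_retries<T, E>(
        mut op: impl FnMut() -> Result<T, E>,
        attempts: u32,
    ) -> Result<T, E> {
        let mut result = op();
        for _ in 1..attempts {
            if result.is_ok() {
                break;
            }
            thread::sleep(Duration::from_millis(100));
            result = op();
        }
        result
    }

    fn main() {
        // Retry the "expected" transient error; a persistent failure
        // still surfaces and can crash with full context.
        let data = with_retries(|| std::fs::read("data.bin"), 3);
        println!("{:?}", data.map(|d| d.len()));
    }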