vbezhenar 4 days ago

Every function can fail with StackOverflowError and you can't do anything about it.

Almost every function can fail with OutOfMemoryError and you can't do anything about it.

I've accepted that everything can fail. Just write code and expect it to throw. Write programs and expect them to abort.

I don't understand this obsession with error values. I remember when C++ designers claimed that exceptions provide faster code execution on the happy path, so even in a systems language they should be preferred. Go error handling is bad. Rust error handling is bad. C error handling is bad, but at least that's understandable.

frumplestlatz 4 days ago

This is silly. We can avoid stack overflow by avoiding unbounded recursion.
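
For example, cap the depth and refuse to recurse further instead of blowing the stack - a trivial sketch (the node type and the depth budget here are made up):

    #include <stddef.h>

    /* Hypothetical tree type, just for illustration. */
    typedef struct node { struct node *left, *right; } node_t;

    #define MAX_DEPTH 64  /* assumed budget; pick one that fits your stack */

    /* Returns 0 on success, -1 if the tree exceeds the stack budget. */
    static int walk(const node_t *n, int depth) {
        if (n == NULL)
            return 0;
        if (depth >= MAX_DEPTH)
            return -1;  /* refuse instead of overflowing the stack */
        if (walk(n->left, depth + 1) != 0)
            return -1;
        return walk(n->right, depth + 1);
    }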

In user-space, memory overcommit means that we will almost or literally never see an out of memory error.

In kernel space and other constrained environments, we can simply check for allocation failure and handle it accordingly.
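
In C terms that discipline is nothing exotic - treat every allocation as fallible and propagate the failure (a minimal sketch):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Treat allocation as a fallible operation and propagate the failure. */
    static char *dup_message(const char *src) {
        char *copy = malloc(strlen(src) + 1);
        if (copy == NULL)
            return NULL;  /* the caller decides: degrade, retry, or abort */
        strcpy(copy, src);
        return copy;
    }

    int main(void) {
        char *m = dup_message("hello");
        if (m == NULL) {
            fputs("out of memory\n", stderr);
            return 1;
        }
        puts(m);
        free(m);
        return 0;
    }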

This is a terrible argument for treating all code as potentially failing with any possible error condition.

vbezhenar 3 days ago

> We can avoid stack overflow by avoiding unbounded recursion.

Only if you control the entire ecosystem, from the application to every library you're ever going to use. And it will not work for library code anyway.

> In user-space, memory overcommit means that we will almost or literally never see an out of memory error.

Have you ever deployed an application to production? Where it runs in a container, with memory limits. Or under good old ulimit. Or in any language with a VM or GC that provides explicit knobs to limit heap size (and these knobs are actually used by deployers).
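
You can reproduce it in a few lines - a POSIX sketch, with RLIMIT_AS standing in for the container limit:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/resource.h>

    int main(void) {
        /* Cap the address space the way a container or `ulimit -v` would. */
        struct rlimit lim = { .rlim_cur = 64UL << 20, .rlim_max = 64UL << 20 };
        if (setrlimit(RLIMIT_AS, &lim) != 0) {
            perror("setrlimit");
            return 1;
        }
        /* Ask for more than the limit allows: malloc returns NULL. */
        void *p = malloc(128UL << 20);
        if (p == NULL) {
            puts("allocation failed: the OOM case is reachable after all");
            return 0;
        }
        free(p);
        return 0;
    }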

> This is a terrible argument for treating all code as potentially failing with any possible error condition.

This is reality. Some programs can ignore it, hand-waving the possibilities away, expecting that it won't happen or that it won't be your problem. That's one approach, I guess.

frumplestlatz 3 days ago

> Only if you control the entire ecosystem, from the application to every library you're ever going to use. And it will not work for library code anyway.

Two edge cases existing is a terrible argument for creating a world in which any possible edge case must also be accounted for.

>> This is a terrible argument for treating all code as potentially failing with any possible error condition.

> This is reality. Some programs can ignore it, hand-waving the possibilities away, expecting that it won't happen or that it won't be your problem. That's one approach, I guess.

No, it’s not reality. It’s the mess you create when you work in languages with untyped exception handling and with people that insist on writing code the way you suggest.

whatevaa 3 days ago

Rust and Go have panics for these. Most of the time there is nothing the application can do by itself: either there is a bug in the application, or there is an actual shortage of memory and only the OS can do anything about it.

I'm not talking about embedded or kernels. Different stories.

malkia 4 days ago

^^^ This. In a recent project of mine, I came to the realization that dealing with memory-mapped files is much harder without exceptions (not that exceptions make it easy, but they at least make it possible).

Why? Let's say you've opened a memory-mapped file, you've got a pointer, and you hand this pointer down to some library - "here, work with this" - and the library thinks: oh, it's normal memory - fine! And then a physical block error happens (whether it's Windows, OS X, Linux, etc.) - and now you have to handle this from a rather large distance, where "error code" handling is not enough: you have to use signal handling with SIGxxx, or Windows SEH handling, or whatever the OS provides.
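
On POSIX, the best I know of looks roughly like this - an untested sketch with sigsetjmp (real code needs per-thread state, handler restoration, and a lot more care):

    #include <setjmp.h>
    #include <signal.h>

    static sigjmp_buf read_env;

    static void on_sigbus(int sig) {
        (void)sig;
        siglongjmp(read_env, 1);  /* unwind back to the guarded read */
    }

    /* Returns 0 and stores the byte on success, -1 if the access faulted. */
    static int guarded_read(const volatile char *mapped, char *out) {
        struct sigaction sa;
        sa.sa_handler = on_sigbus;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = 0;
        if (sigaction(SIGBUS, &sa, NULL) != 0)
            return -1;

        if (sigsetjmp(read_env, 1) != 0)
            return -1;  /* we got here via siglongjmp: the page was bad */

        *out = *mapped;  /* may fault if the backing block has failed */
        return 0;
    }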

And then you have languages like GoLang/Rust/others where this is a pain point (yes, you can handle it - but how well?).

If you look at ReactOS, the code is full of `__try/__except` - https://github.com/search?q=repo%3Areactos%2Freactos+_SEH2_T... - because user-provided memory HAS to be checked: you don't want an exception happening while the kernel reads bad user memory.
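
The pattern is basically this (an MSVC-specific sketch; the function name is mine):

    #include <windows.h>

    /* Probe user-supplied memory under __try, so a bad pointer becomes
       a return code instead of a crash. */
    static BOOL safe_read_byte(const BYTE *user_ptr, BYTE *out) {
        __try {
            *out = *user_ptr;  /* may raise an access violation */
            return TRUE;
        } __except (EXCEPTION_EXECUTE_HANDLER) {
            return FALSE;      /* fault handled; the caller sees failure */
        }
    }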

So it's all good and fine, until you have to face this problem... Or decide not to use mmap files (is this even possible?).

Okay, I know it's just a silly little thing I'm pointing out here - but I don't know of any good solution offhand...

And even handling this in C/C++ with all SEH capabilities - it still sucks...

vlovich123 4 days ago

If the drive fails and you get a signal it’s perfectly valid to just let the default signal handler crash your process. Signals by definition are delivered non-locally, asynchronously, and there’s generally nothing to try/catch or recover. So handling this in Rust is no different than any other language because these kinds of failures never result in locally handleable errors.

malkia 2 days ago

That's not true - you can handle this pretty well with exceptions (yes, it's a nuisance that you have to add them, but it's doable)... Not so much without.

tialaramex 4 days ago

> Every function can fail with StackOverflowError and you can't do anything about it.

> Almost every function can fail with OutOfMemoryError and you can't do anything about it.

In fact we can - though rarely do - prove software does not have either of these mistakes. We can bound stack usage via analysis; we usually don't, but it's possible.

And avoiding OOM is such a widespread concern that Rust-for-Linux deliberately makes all allocating calls explicitly fallible, and offers strategies like Vec::push_within_capacity: a method which, if it succeeds, pushes the object into the collection - but if the collection is full, rather than allocate (which might fail), it gives you the object back. No, you take it.
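
The shape of that API, sketched in C for anyone who doesn't read Rust (names and types are illustrative):

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct {
        int *items;
        size_t len, cap;
    } int_vec;

    /* Never allocates: on a full buffer, the caller keeps the value. */
    static bool push_within_capacity(int_vec *v, int value) {
        if (v->len == v->cap)
            return false;  /* full - "no, you take it" */
        v->items[v->len++] = value;
        return true;
    }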