| ▲ | esafak 5 hours ago |
| > "Modern" languages try to avoid exceptions by using sum types and pattern matching plus lots of sugar to make this bearable. I personally dislike both exceptions and its emulation via sum types. ... I personally prefer to make the error state part of the objects: Streams can be in an error state, floats can be NaN and integers should be low(int) if they are invalid. Special values like NaN are half-assed sum types. The latter give you compiler guarantees. |
|
| ▲ | SJMG 2 hours ago | parent | next [-] |
| Not a defense of the poison value approach, but in this thread Araq (Nim's principal author) lays out his defense for exceptions. https://forum.nim-lang.org/t/9596#63118 |
|
| ▲ | kace91 5 hours ago | parent | prev | next [-] |
| I’d like to see their argument for it. I don’t see the benefit of pushing NaN through a code path as if it were a number, corrupting every operation it touches, and the same goes for the other sentinel values. |
| |
| ▲ | snek_case 3 hours ago | parent | next [-] |
| The reason NaN exists is performance, AFAIK. On a GPU, for example, you can't really have exceptions: you don't want to be constantly checking "did this individual floating-point op produce an error?" It's easier and faster for the floating-point unit to flag the output as a NaN. Obviously NaNs long predate GPUs, but floating-point support has been hardware-accelerated in a variety of ways for a long time. |
| That said, I agree that the way NaNs propagate is messy. You can end up finding out that there was an error much later in the program's execution, and then it can be tricky to trace where it came from. |
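The trade-off described above can be sketched in Python (a hypothetical `normalize` pipeline, not from the thread): quiet NaNs flow through the arithmetic, so the hot loop needs no per-element error branches, and validity is checked once at the end.

```python
import math

def normalize(xs):
    # Hypothetical pipeline: invalid inputs (negatives) become NaN and
    # propagate through the arithmetic, so the loop has no error branches.
    roots = [math.sqrt(x) if x >= 0 else math.nan for x in xs]
    total = sum(roots)              # a single NaN poisons the whole sum
    return [r / total for r in roots]

# One batched validity check after the whole pipeline, instead of one per op:
ok = normalize([4.0, 9.0])
bad = normalize([4.0, -1.0, 9.0])
print(any(math.isnan(r) for r in ok))   # False
print(any(math.isnan(r) for r in bad))  # True: the bad input surfaces late
```

This is also the downside the comment mentions: the final check tells you *that* something went wrong, not *where*.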
| ▲ | cb321 4 hours ago | parent | prev [-] |
| There is no direct argument/guidance that I saw for "when to use them", but masked arrays { https://numpy.org/doc/stable/reference/maskedarray.html } (an alternative to sentinels in array-processing sub-languages) have been in NumPy (following its antecedents) from its start. I'm guessing you could do a code search for its imports and find arguments pro & con in various places surrounding that. From memory, I have heard "infecting all downstream" described as both "a feature" and "a problem". Experience with numpy programs did lead to sentinels in the https://github.com/c-blake/nio Nim package, though. |
| Another way to investigate popularity here is to see how much code uses signaling NaN vs. quiet NaN, and/or arguments pro/con those things and floating-point exceptions in general. I imagine all of it comes down to questions of how locally code can/should be forced to confront problems, much like arguments about try/except/catch kinds of exception-handling systems vs. other alternatives. In the age of SIMD there can be performance angles to these questions, and essentially "batching factors" for error handling that relate to all the other batching factors going on. |
| Today's version of this wiki page also includes a discussion of integer NaN: https://en.wikipedia.org/wiki/NaN . It notes that the R language uses the minimal signed integer value (i.e. 0x80000000) for NA. There is also the whole database NULL question: https://en.wikipedia.org/wiki/Null_(SQL) |
| To be clear, I am not taking some specific position, but I think all these topics inform answers to your question. I think it's something with trade-offs that people have a tendency to over-simplify based on a limited view. |
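For reference, a minimal sketch (toy data, not from the thread) of the NumPy masked arrays mentioned above, the alternative to in-band sentinels: invalid entries carry an explicit mask, and reductions skip them instead of being poisoned.

```python
import numpy as np

data = np.array([1.0, -4.0, 4.0, 9.0])

# Mask the entries where sqrt is undefined, instead of encoding them as NaN.
masked = np.ma.masked_where(data < 0, data)
roots = np.ma.sqrt(masked)

# Reductions ignore masked entries rather than propagating a poison value:
print(roots.mean())   # 2.0 (mean of sqrt(1), sqrt(4), sqrt(9))
print(roots.mask[1])  # True: the invalid entry stays flagged, not silent
```

The mask travels with the array, so the "infecting all downstream" behavior becomes opt-in per reduction rather than automatic.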
| ▲ | kace91 3 hours ago | parent [-] |
| >To be clear, I am not taking some specific position, but I think all these topics inform answers to your question. I think it's something with trade-offs that people have a tendency to over-simplify based on a limited view. |
| That's fair, I wasn't dismissing the practice, just commenting that it's a shame the author didn't clarify their preference. |
| I don't think the popularity angle is a good proxy for the usefulness/correctness of the practice. Many factors can influence popularity. |
| Performance is a very fair point. I don't know enough to understand the details, but I could see it being a strong argument. It is counterintuitive to move forward with calculations known to be useless, but maybe the cost of checking every calculation for validity is larger than the savings from skipping the invalid ones early. |
| There is a catch, though. NumPy and R are heavily oriented toward calculation pipelines, which is a very different use case from general programming, where the side effects of undetected 'corrupt' values can be more serious. |
| ▲ | cb321 3 hours ago | parent [-] |
| The conversation around Nim for the past 20 years has been rather fragmented - IRC channels, Discord channels (dozens, I think), later the Forum, GitHub issue threads, pull request comment threads, RFCs, etc. Araq has a tendency to defend his ideas in one venue (sometimes quite cogently) and leave it to questioners to dig up where those trade-off conversations might be. I've disliked the fractured nature of the conversation for the 10 years I've known about it, but assigned it to a kind of "kids these days, whachagonnado" status. Many conversations (and life!) are just like that - you kind of have to "meet people where they are". |
| Anyway, this topic of "error handling scoping/locality" may be the single most cross-cutting topic across CPUs, programming languages, databases, and operating systems (I would bin NumPy/R under PLangs+Databases, as they are kind of "data languages"). Consequently, opinions can be very strong (often with this sense of "Everything hinges on this!") in all directions, but rarely take a "complete" view. |
| If you are interested in "fundamental, not just popularity" discussions, and it sounds like you are, I feel like the database community discussions are probably the most "refined/complete" in terms of trade-offs - though that could simply be my personal exposure, and DB people tend to treat CPU SIMD as a "recent" innovation (hahaha, Seymour Cray was building vector supercomputers back in the 1970s and '80s). Anyway, just trying to help. That link to the DB NULL page I gave is probably a good starting point. |
|
|
|
|
| ▲ | saghm an hour ago | parent | prev | next [-] |
| Yeah, I'm not sure I've ever seen NaN held up as an example to be emulated before, rather than as something people complain about. |
| |
| ▲ | echelon 34 minutes ago | parent [-] |
| Holy shit, I'd love to see NaN as a proper sum type. That's the way to do it. That would fix everything. |
|
|
| ▲ | elcritch 5 hours ago | parent | prev [-] |
| The compiler can still enforce checks, such as with nil checks for pointers. In my opinion it’s overall cleaner if the compiler handles the enforcement when it can. Something like “ensure this variable is initialized” can just be another compiler check, combined with an effects system that lets you control which errors to enforce checking on. Nim has a nice `forbids: IOException` that lets users do that. |
| |
| ▲ | ux266478 4 hours ago | parent | next [-] |
| Both of these things, respectively, are just pattern matches and monads - just not user-definable ones. |
| ▲ | umanwizard 5 hours ago | parent | prev [-] |
| > The compiler can still enforce checks, such as with nil checks for pointers. |
| Only sometimes, when the compiler happens to understand the code fully enough. With sum types it can be enforced all the time, and bypassed when the programmer explicitly wants it to be. |
| ▲ | wavemode 4 hours ago | parent [-] |
| There's nothing preventing this for floats and ints in principle. E.g. the machine representation could be float, but the type in the eyes of the compiler could be `float | nan` until you check it for NaN (at which point it becomes `float`). Then any operation which can return NaN would return `float | nan` instead. |
| tbh this system (assuming it works that way) would be stricter at compile time than the vast majority of languages. |
|
|