cb321 4 hours ago

There is no direct argument/guidance that I saw for "when to use them", but masked arrays (https://numpy.org/doc/stable/reference/maskedarray.html), an alternative to sentinels in array-processing sub-languages, have been in NumPy (following its antecedents) from its start. I'm guessing you could do a code search for its imports and find arguments pro & con in the various places surrounding that.
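For anyone unfamiliar, here is a minimal sketch (purely illustrative, not from any particular codebase) of how numpy.ma carries the mask alongside the data instead of an in-band sentinel:

    import numpy as np

    # Illustrative only: -999.0 is a made-up in-band sentinel converted into a mask.
    data = np.array([1.0, 2.0, -999.0, 4.0])
    masked = np.ma.masked_values(data, -999.0)   # sentinel -> out-of-band mask

    print(masked)          # [1.0 2.0 -- 4.0]
    print(masked.mean())   # 2.333..., the masked element is simply excluded
    print(masked + 10)     # the mask "infects" downstream results: [11.0 12.0 -- 14.0]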

From memory, I have heard "infecting all downstream" described as both "a feature" and "a problem". Experience with NumPy programs did lead to sentinels in the https://github.com/c-blake/nio Nim package, though.

Another way to investigate popularity here is to see how much code uses signaling NaN vs. quiet NaN, and/or the arguments pro & con for those (and for floating-point exceptions in general).
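A tiny sketch of the difference, approximating the "signaling" side with NumPy's errstate trap rather than true IEEE signaling NaNs (so treat it as a sketch of the trade-off, not of the hardware mechanism):

    import numpy as np

    x = np.array([1.0, 0.0, 4.0])

    # Quiet-NaN style: 0/0 silently becomes nan and rides along downstream.
    with np.errstate(invalid="ignore", divide="ignore"):
        y = x / x                  # [ 1. nan  1.]
    print(y, np.nansum(y))         # the caller decides later what to do about the nan

    # "Signaling" style, approximated: the invalid operation raises immediately,
    # forcing the problem to be confronted locally.
    try:
        with np.errstate(invalid="raise"):
            y = x / x
    except FloatingPointError as e:
        print("handled locally:", e)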

I imagine all of it comes down to how locally code can/should be forced to confront problems, much like arguments about try/except/catch styles of exception handling vs. other alternatives. In the age of SIMD there can also be performance angles to these questions: essentially "batching factors" for error handling that relate to all the other batching factors going on.
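To make the batching idea concrete, a rough sketch (made-up data, nothing from a real codebase) of per-element confrontation vs. letting NaNs propagate through a vectorizable computation and confronting them once at the end:

    import numpy as np

    values = np.array([2.0, -1.0, 9.0, 0.0])   # -1.0 and 0.0 are the "problem" inputs

    # Locally confronted: every element gets its own check/branch inside the loop.
    out_local = []
    for v in values:
        out_local.append(np.log(v) if v > 0.0 else float("nan"))

    # Batched: run the whole SIMD-friendly computation, then handle the fallout
    # in one pass over the NaN/inf values that propagated through it.
    with np.errstate(invalid="ignore", divide="ignore"):
        out_batched = np.log(values)
    bad = ~np.isfinite(out_batched)
    print(out_batched, "bad rows:", np.flatnonzero(bad))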

Today's version of this wiki page also includes a discussion of integer NaN: https://en.wikipedia.org/wiki/NaN. It notes that the R language uses the minimal signed 32-bit integer value (i.e. 0x80000000) for NA.
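A sketch of what that R-style integer NA looks like if you roll it by hand (the names here are invented; R does this inside its own C code):

    import numpy as np

    INT_NA = np.int32(-2**31)      # 0x80000000 reinterpreted as a signed 32-bit value

    ages = np.array([34, INT_NA, 29], dtype=np.int32)
    valid = ages != INT_NA
    print(ages[valid].mean())      # 31.5, with the sentinel rows masked out by hand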

There is also the whole database NULL question: https://en.wikipedia.org/wiki/Null_(SQL)
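For a concrete taste of SQL's three-valued logic around NULL, a small sketch using the stdlib sqlite3 module (table/column names invented):

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t (x INTEGER)")
    con.executemany("INSERT INTO t VALUES (?)", [(1,), (None,), (3,)])

    # NULL = NULL evaluates to UNKNOWN, so '=' never matches the NULL row...
    print(con.execute("SELECT COUNT(*) FROM t WHERE x = NULL").fetchone())   # (0,)
    # ...you have to confront missingness explicitly with IS NULL.
    print(con.execute("SELECT COUNT(*) FROM t WHERE x IS NULL").fetchone())  # (1,)
    # Aggregates quietly skip NULLs, much like NaN-ignoring reductions.
    print(con.execute("SELECT SUM(x) FROM t").fetchone())                    # (4,)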

To be clear, I am not taking some specific position, but I think all these topics inform answers to your question. I think it's something with trade-offs that people have a tendency to over-simplify based on a limited view.

kace91 3 hours ago | parent

>To be clear, I am not taking some specific position, but I think all these topics inform answers to your question. I think it's something with trade-offs that people have a tendency to over-simplify based on a limited view.

That's fair, I wasn't dismissing the practice, just commenting that it's a shame the author didn't clarify their preference.

I don't think the popularity angle is a good proxy for the usefulness/correctness of the practice. Many factors can influence popularity.

Performance is a very fair point. I don't know enough to understand the details, but I could see it being a strong argument. It is counterintuitive to move forward with calculations known to be useless, but maybe the cost of checking every calculation for validity is larger than the savings from skipping the invalid ones early.

There is a catch, though. NumPy and R are very oriented to calculation pipelines, which is a very different use case from general programming, where the side effects of undetected 'corrupt' values can be more serious.

cb321 3 hours ago | parent

The conversation around Nim for the past 20 years has been rather fragmented: IRC channels, Discord channels (dozens, I think), later the Forum, GitHub issue threads, pull request comment threads, RFCs, etc. Araq has a tendency to defend his ideas in one venue (sometimes quite cogently) and leave it to questioners to dig up where those trade-off conversations might be. I've disliked the fractured nature of the conversation for the 10 years I've known about it, but assigned it a kind of "kids these days, whachagonnado" status. Many conversations (and life!) are just like that - you kind of have to "meet people where they are".

Anyway, this topic of "error handling scoping/locality" may be the single most cross-cutting topic across CPUs, PLangs, Databases, and operating systems (I would bin NumPy/R under PLangs+Databases, as they are kind of "data languages"). Consequently, opinions can be very strong (often with this sense of "Everything hinges on this!") in all directions, but rarely take a "complete" view.

If you are interested in "fundamental, not just popularity" discussions, and it sounds like you are, I feel like the database community discussions are probably the most "refined/complete" in terms of trade-offs, but that could simply be my personal exposure, and DB people tend to ignore CPU SIMD because it's such a "recent" innovation (hahaha, Seymour Cray was doing it in the 1980s for the Cray-3 vector supercomputer). Anyway, just trying to help. That link to the DB NULL page I gave is probably a good starting point.