ridiculous_fish 5 days ago

My favorite FP Fun Fact is that float comparisons can (almost) use integer comparisons. To determine if a > b, reinterpret a and b as signed ints and just compare those like any old ints. It (almost) works!

The implication is that the next-larger float is (almost) always what you get when you reinterpret a float's bits as an integer and add one. For example, start with the zero float: all bits zero. Add one using integer arithmetic. In int-speak it's just one; in float-speak it's a tiny-mantissa denormal. But that's the next float, and `nextafter` is typically implemented with exactly this kind of integer arithmetic.
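
For example, here's a rough Rust sketch of that increment trick (only for finite, non-negative inputs; `next_up_positive` is just an illustrative name):

  fn next_up_positive(x: f32) -> f32 {
      // assumes x is finite and non-negative: no NaN, infinity, or -0.0 handling
      f32::from_bits(x.to_bits() + 1)
  }

  fn main() {
      // the next float after 1.0 differs from it by machine epsilon
      assert_eq!(next_up_positive(1.0) - 1.0, f32::EPSILON);
      // starting from all-zero bits: the smallest positive subnormal
      println!("{:e}", next_up_positive(0.0)); // ~1.4e-45
  }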

Learning that floats are ordered according to integer comparisons makes it feel way more natural. But of course there are the usual asterisks: this fails with NaNs, infinities, and negative zero. We get a few nice things, but only a few.
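
A minimal sketch of the comparison half (sticking to ordinary positive values; `float_lt` is just an illustrative name):

  // compare two floats by reinterpreting their bit patterns as signed ints
  fn float_lt(a: f32, b: f32) -> bool {
      (a.to_bits() as i32) < (b.to_bits() as i32)
  }

  fn main() {
      assert!(float_lt(1.5, 2.0));
      assert!(!float_lt(2.0, 1.5));
      // the smallest subnormal still sorts below the smallest normal
      assert!(float_lt(f32::from_bits(1), f32::MIN_POSITIVE));
  }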

Sniffnoy 5 days ago | parent | next [-]

This isn't accurate. It's true for positive numbers, and when comparing a positive to a negative, but false for comparisons between negative numbers. Standard floating point uses sign-magnitude representation, while signed integers these days use 2s-complement. On negative numbers, comparisons are reversed between these two encodings. Incrementing a float as if it were an integer will, in ordinary circumstances, get you the next one larger in magnitude, but with the same sign -- i.e., you go up for positives but down for negatives. Whereas with signed integers, you always go up except when there's an overflow into the sign bit.

A more correct version of the statement would be that comparison is the same as on sign-magnitude integers. Of course, this still has the caveats you already mentioned.
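
A quick Rust illustration of that reversal on negatives:

  fn main() {
      let a = (-1.0_f32).to_bits() as i32; // 0xBF80_0000 reinterpreted
      let b = (-2.0_f32).to_bits() as i32; // 0xC000_0000 reinterpreted
      // as 2s-complement integers the comparison flips: the "greater" bit
      // pattern belongs to the float that is more negative
      assert!(b > a);
      assert!(-2.0_f32 < -1.0_f32);
  }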

adgjlsfhk1 5 days ago | parent | next [-]

One of the unambiguously nice things about Posits (unlike floats) is that they use a 2s-complement scheme, which makes it actually true for all values that they sort like integers.

Sniffnoy 5 days ago | parent [-]

I was going to say "well, you've still got NaR", but apparently that's been defined to sort less than all other posits? Huh, OK.

adgjlsfhk1 4 days ago | parent [-]

Yeah, having a total order just makes everything so much nicer. Total order is one of the defining properties of the reals, and realistically, if the user calls sort (or puts one in a B-tree), you have to put the NaNs at one side or the other (unless you're C/C++ and allow that to launch the nukes).
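
In Rust, for example, `total_cmp` gives you exactly that kind of total order, with the NaNs collected at one end (small sketch):

  fn main() {
      let mut v = vec![2.0_f32, f32::NAN, -1.0, f32::NEG_INFINITY, 0.5];
      // total_cmp implements the IEEE 754 totalOrder predicate,
      // so NaN doesn't poison the comparison
      v.sort_by(|a, b| a.total_cmp(b));
      println!("{v:?}"); // NaN lands at one end, e.g. [-inf, -1.0, 0.5, 2.0, NaN]
  }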

ridiculous_fish 5 days ago | parent | prev [-]

You're right, thank you for the correction.

Sharlin 4 days ago | parent | prev | next [-]

For what it's worth, here's the Rust std implementation [1] of the total (as in, makes NaNs comparable) comparison algorithm given by IEEE 754:

  let mut left = self.to_bits() as i32;
  let mut right = other.to_bits() as i32;

  // In case of negatives, flip all the bits except the sign
  // to achieve a similar layout as two's complement integers

  left ^= (((left >> 31) as u32) >> 1) as i32;
  right ^= (((right >> 31) as u32) >> 1) as i32;

  left.cmp(&right)
[1] https://doc.rust-lang.org/src/core/num/f32.rs.html#1348
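
A couple of spot checks of the order it produces (a small sketch using the public `total_cmp` wrapper around the code above):

  use std::cmp::Ordering;

  fn main() {
      // -0.0 sorts strictly before +0.0 under totalOrder
      assert_eq!((-0.0_f32).total_cmp(&0.0), Ordering::Less);
      // a NaN with the sign bit clear sorts after +infinity
      let pos_nan = f32::from_bits(0x7FC0_0000); // a quiet NaN, sign bit clear
      assert_eq!(pos_nan.total_cmp(&f32::INFINITY), Ordering::Greater);
  }
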
splicer 5 days ago | parent | prev [-]

Thanks for HexFiend, man! I use it several times a week.

ForOldHack 2 days ago | parent [-]

I seriously do NOT want back the two-plus hours I spent diddling (1-bit NOT) all those bits. I first learned it in binary on an HP, then had to learn it on an IBM mainframe, and then got an 8087, which was so incredibly fast, but error crept in for Fourier transforms. I started using extended double to keep the errors at bay. Hilbert spaces, here we came.

The killer app was not Lotus 1-2-3 v2, but Turbo Pascal w/ 8087 support. It screamed through tensors and 3D spaces, places we only saw on plotters.

It was not until I got a G3 and used Graphing Calculator that I could explore sombrero functions of increasing frequency.

Floating point math is essential, not an option.