mizmar 4 days ago

There is another way to compare floats for rough equality that I haven't seen much explored anywhere: bit-cast to integer, strip few least significant bits and then compare for equality. This is agnostic to magnitude, unlike an epsilon, which has to be tuned to the range of values you expect in order to give a meaningful result.

twic 13 hours ago | parent | next [-]

This doesn't work. However many bits you strip, there are pairs of numbers one machine epsilon apart that will truncate to different values.

SideQuark 12 hours ago | parent | prev | next [-]

Completely worked out at least 20 years ago: https://www.lomont.org/papers/2005/CompareFloat.pdf

fn-mote 11 hours ago | parent | next [-]

Note for the skeptic: this cites Knuth, Volume II, writes out the IEEE edge cases, and optimizes.

mizmar 8 hours ago | parent | prev [-]

Great reading, thanks. It describes how to handle ±0, and it compares the integer difference rather than truncating bits, which avoids the truncation errors mentioned above. The first half of the paper arrives at this correct snippet; the second half is about optimizing it.

    bool DawsonCompare(float af, float bf, int maxDiff)
    {
        // Reinterpret the float bits as integers (as written in the
        // paper; in modern C++ prefer memcpy or std::bit_cast to
        // avoid the strict-aliasing violation).
        int ai = *reinterpret_cast<int*>(&af);
        int bi = *reinterpret_cast<int*>(&bf);
        // Map the sign-magnitude bit patterns onto a monotonic
        // two's-complement scale so negative values order correctly.
        if (ai < 0)
            ai = 0x80000000 - ai;
        if (bi < 0)
            bi = 0x80000000 - bi;
        // The integer difference now counts representable floats
        // (ULPs) between the two values.
        int diff = ai - bi;
        return abs(diff) < maxDiff;
    }
ethan_smith 11 hours ago | parent | prev | next [-]

This is essentially ULP (units in the last place) comparison, and it's a solid approach. One gotcha: IEEE 754 floats have separate representations for +0 and -0, so values straddling zero (like 1e-45 and -1e-45) will look maximally far apart as integers even though they're nearly equal. You need to handle the sign bit specially.

kbolino 6 hours ago | parent [-]

There's another gotcha. Consider positive, normal x and y where ulp(y) != ulp(x). Bitwise comparison, regardless of tolerance, will consider x to be far from y, even though they might be adjacent numbers, e.g. if y = x+ulp(x) but y is a power of 2.

ack_complete 4 hours ago | parent | next [-]

This case actually works because for finite numbers of a given sign, the integer bit representations are monotonic with the value, due to the placement of the exponent and mantissa fields and the implicit leading mantissa bit. For instance, 1.0 in IEEE float is 0x3F800000, and the representable value immediately below it is 0x3F7FFFFF.

Signed zero and the sign-magnitude representation are more of an issue, but that can be resolved by XORing the sign bit into the mantissa and exponent fields, flipping the negative range. This places -0 adjacent to +0, which is typically enough, and can be fixed up fully for minimal additional cost (another subtract).

kbolino 3 hours ago | parent [-]

I interpreted OP's "bit-cast to integer, strip few least significant bits and then compare for equality" message as suggesting this kind of comparison (Go):

  func equiv(x, y float32, ignoreBits int) bool {
      mask := uint32(0xFFFFFFFF) << ignoreBits
      xi, yi := math.Float32bits(x), math.Float32bits(y)
      return xi&mask == yi&mask
  }
with the sensitivity controlled by ignoreBits, higher values being less sensitive.

Supposing y is 1.0 and x is the predecessor of 1.0, the smallest value of ignoreBits for which equiv would return true is 24.

But a worst case example is found at the very next power of 2, 2.0 (bitwise 0x40000000), whose predecessor is quite different (bitwise 0x3FFFFFFF). In this case, you'd have to set ignoreBits to 31, and thus equivalence here is no better than checking that the two numbers have the same sign.

oasisaimlessly 4 hours ago | parent | prev [-]

I don't think this is true. Modulo the sign bit, the "next float" operation is the same as incrementing the bit pattern as an integer.

kbolino 2 hours ago | parent [-]

Sure, but that operator can propagate a carry all the way to the most significant bit, so a check for bitwise equality after "strip[ping] few least significant bits" will yield false in some cases. The pathologically worst case for single precision, for example, is illustrated by the value 2.0 (bitwise 0x40000000) and its predecessor, which differ in all bits except the sign.

andyjohnson0 12 hours ago | parent | prev | next [-]

> strip few least significant bits

I'm unconvinced. Doesn't this just replace the need to choose a suitable epsilon with the need to choose the right number of bits to strip? And the latter affords far fewer choices for the degree of "roughness" than the former.

chaboud 9 hours ago | parent [-]

Not quite. It's basically a combined mantissa and exponent test, so it can be thought of as functionally equivalent to scaling epsilon by a power of two (the shared exponent of the nearly equal floating point values) and then using that epsilon.

I think I'll just use scaled epsilon... though I've gotten lots of performance wins out of direct bitwise trickery with floats (e.g., fast rounding with mantissa normalization and casting).
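For reference, a scaled (relative) epsilon check might look like this in Go (a sketch; relEqual and the thresholds are mine, and values near zero still want an absolute-tolerance fallback):

```go
package main

import (
	"fmt"
	"math"
)

// relEqual scales the tolerance by the larger operand's magnitude,
// which is roughly what a fixed ULP budget does implicitly via the
// shared exponent.
func relEqual(a, b, relEps float64) bool {
	diff := math.Abs(a - b)
	scale := math.Max(math.Abs(a), math.Abs(b))
	return diff <= relEps*scale
}

func main() {
	fmt.Println(relEqual(1e8, 1e8+1, 1e-7)) // true: tiny relative to magnitude
	fmt.Println(relEqual(1e-8, 2e-8, 1e-7)) // false: a factor of two apart
}
```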

StilesCrisis 11 hours ago | parent | prev | next [-]

Rather than stripping bits, you can just check whether the bit-cast representations are less than N apart (choose an N that works for your data; a good starting point is 4).

This breaks down across the positive/negative boundary, but honestly, that's probably a good property. -0.00001 is not all that similar to +0.00001 despite being close on the number line.

It also requires that the inputs are finite (no INF/NAN), unless you are okay saying that FLT_MAX is roughly equal to infinity.

hansvm 8 hours ago | parent | prev [-]

That works well for sorting/bucketing/etc. in a few places, but as a comparison it's prone to false negatives (values that really are close get reported as not close), so you're restricted to algorithms tolerant of that behavior.