skeezyboy 4 days ago

> Define boundary conditions -- how much precision do you need?

Imagine if integer arithmetic gave wrong answers in certain conditions, lol. Why did we choose the current compromise?

ForOldHack 2 days ago | parent | next [-]

Compromises. We had BCD for finance, binary for games, and floating point for math. I wrote a sample 'make change' using floating point, BCD, and integer (normalizing by multiplying by 100). The integer version ripped through it, but surprisingly BCD kept up with floating point and, with compiler optimizations, was significantly faster in certain edge cases and unit tests.

You get surprising things with commonplace problems.
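
A minimal Rust sketch of that integer-cents approach (the function name and denominations here are illustrative, not the original benchmark code):

    // "Make change" with integer cents: prices are normalized by
    // multiplying dollars by 100, so every step is exact integer
    // arithmetic with no rounding error.
    fn make_change(paid_cents: u64, price_cents: u64) -> Vec<(u64, u64)> {
        // Denominations in cents: dollar, quarter, dime, nickel, penny.
        const DENOMS: [u64; 5] = [100, 25, 10, 5, 1];
        // Assumes paid_cents >= price_cents.
        let mut remaining = paid_cents - price_cents;
        let mut change = Vec::new();
        for &d in &DENOMS {
            let count = remaining / d;
            if count > 0 {
                change.push((d, count));
                remaining %= d;
            }
        }
        change
    }

    fn main() {
        // $5.00 paid for a $3.37 item:
        // 1 dollar, 2 quarters, 1 dime, 3 pennies.
        println!("{:?}", make_change(500, 337));
    }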

kbolino 4 days ago | parent | prev | next [-]

In my experience, most code that operates on integers does not anticipate overflow or wraparound. So it is almost guaranteed to produce wrong results when those conditions occur, and is only saved by the fact that they usually don't occur in practice.

It is odd to me that every major CPU instruction set has ALU condition flags to indicate when these conditions have occurred, yet many programming languages ignore them entirely or make them hard to access. Rust at least has the quartet of saturating, wrapping, checked, and unchecked arithmetic operations.
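
For reference, a short sketch of those four flavors as Rust's standard library exposes them (unchecked_add requires unsafe because overflow there is undefined behavior):

    fn main() {
        let x: u8 = 250;

        // Saturating: clamps at the type's bounds instead of wrapping.
        assert_eq!(x.saturating_add(10), 255);

        // Wrapping: explicitly requests modular (two's-complement) arithmetic.
        assert_eq!(x.wrapping_add(10), 4);

        // Checked: returns None on overflow, so the caller must handle it.
        assert_eq!(x.checked_add(10), None);
        assert_eq!(x.checked_add(5), Some(255));

        // Unchecked: promises the compiler overflow cannot happen;
        // if it does, that is undefined behavior, hence the unsafe block.
        let y = unsafe { x.unchecked_add(5) };
        assert_eq!(y, 255);
    }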

ForOldHack 2 days ago | parent [-]

The trick is to get your ALUs to do some of the math for you. Oh I miss the days of the 68020 fast barrel shifter and the 68030 byte smears. Tricky stuff lost to the silicon/sands of time.

mwkaufma 4 days ago | parent | prev [-]

They're not "wrong" -- the error bars are well-defined.

Signed integer overflow, OTOH, is undefined behavior (in C and C++), so it's worse.
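
A rough Rust illustration of the contrast (Rust defines overflow rather than leaving it undefined as C does, so this only approximates the point): floating-point error stays within a few ULPs of the true result, while integer overflow must be detected explicitly or the result is simply wrong.

    fn main() {
        // Floating point: 0.1 + 0.2 != 0.3, but the error is bounded --
        // it sits within a few ULPs of the mathematically exact value.
        let sum = 0.1_f64 + 0.2_f64;
        assert_ne!(sum, 0.3);
        assert!((sum - 0.3).abs() <= 4.0 * f64::EPSILON);

        // Integer overflow: in C, signed overflow is undefined behavior.
        // Rust defines it (panic in debug builds, wrap in release) and
        // lets you check for it explicitly.
        let (wrapped, overflowed) = i32::MAX.overflowing_add(1);
        assert!(overflowed);
        assert_eq!(wrapped, i32::MIN);
    }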