jcranmer 2 days ago:
The differences are almost certainly not in how the two OSes round floats: the IEEE rounding modes are standard, and almost no one bothers to change the rounding mode from the default. For cross-OS issues, the most likely culprit is that Windows and Linux use different libm implementations, which means the results of functions like sin or atan2 are going to be slightly different.
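A quick way to check for libm divergence is to build the same snippet on both OSes and compare the exact bit patterns. A minimal C sketch (the argument 1e6 and the output format are just illustrative choices):

    #include <math.h>
    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    int main(void) {
        /* sin() is not required to be correctly rounded, so glibc on
           Linux and Microsoft's CRT on Windows may legitimately disagree
           in the last ulp or two; a large argument also stresses each
           library's range reduction. */
        double r = sin(1e6);
        uint64_t bits;
        memcpy(&bits, &r, sizeof bits);
        printf("sin(1e6) = %.17g  bits = 0x%016llx\n",
               r, (unsigned long long)bits);
        return 0;
    }

If the hex bit patterns differ between the two builds, the discrepancy is in libm, not in the rounding modes.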
gpderetta 2 days ago:
The libm difference might explain it, but another possible difference is that long double is 80 bits on x86 Linux and 64 bits on x86 Windows. My recollection is fuzzy, but IIRC the legacy x87 control word is always set to extended precision on Linux, while it is set to double precision on Windows, and this affects normal float and double computations as well: the conversion to float or double happens only when storing to and loading from memory, while intermediate in-register operations always run at the maximum enabled precision. Changing the precision before each operation is expensive, so it is not done.

This is one of the causes of x87's apparent nondeterminism, since the result depends on the compiler unpredictably spilling fp registers [1]: unless you always use the maximum enabled precision, computations might not be reproducible from one build to the next, even in the same environment.

[1] Eventually GCC added compilation modes with deterministic behavior, but that was well after x87 was obsolete. In the meantime people had to make do with -ffloat-store and/or volatile. See https://gcc.gnu.org/wiki/FloatingPointMath.

edit: but you know this, as you mentioned it elsethread.
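A minimal C sketch of the spill effect, assuming a 32-bit x86 build that uses x87 math (e.g. gcc -m32 -mfpmath=387; with SSE math the multiply overflows immediately):

    #include <stdio.h>

    int main(void) {
        volatile double ten = 10.0;   /* volatile blocks constant folding */
        double x = 1e308;
        /* 1e308 * 10 overflows a 64-bit double (max ~1.8e308) but fits in
           an 80-bit x87 register, whose exponent field is wider. */
        double t = x * ten;
        /* If t stayed in a register, this prints 1e+308; if the compiler
           spilled t to memory as a double (or -ffloat-store forced the
           store), t is already +inf and this prints inf. */
        printf("%g\n", t / ten);
        return 0;
    }

Strictly, this trick exploits the wider exponent range of in-register x87 values, which the precision-control bits do not narrow, but the dependence on unpredictable spills is exactly the same as for the mantissa precision.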
zokier 2 days ago:
The problem with rounding modes and other floating-point environment flags is that any library, anywhere, might flip one, and suddenly the whole program changes behavior.
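For example, a minimal sketch of the process-wide effect using C99 fenv (compile with something like -frounding-math on GCC or /fp:strict on MSVC, and link -lm where needed, so the compiler doesn't assume the default mode):

    #include <fenv.h>
    #include <stdio.h>

    #pragma STDC FENV_ACCESS ON  /* often ignored by GCC; see build flags */

    int main(void) {
        volatile double x = 1.0, y = 3.0;
        printf("round-to-nearest: %.20f\n", x / y);

        /* Suppose some library's init code did this and never restored it: */
        fesetround(FE_UPWARD);

        /* Every subsequent FP operation in the process now rounds upward. */
        printf("after FE_UPWARD:  %.20f\n", x / y);
        return 0;
    }

The two divisions are textually identical yet print different values, and nothing at the call site hints that a flag was flipped somewhere else.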