gpderetta 2 days ago
The libm difference might explain it, but another possible difference is that long double is 80 bits on x86 Linux and 64 bits on x86 Windows. My recollection is fuzzy, but IIRC the legacy x87 control word is always set to extended precision on Linux, while it is set to double precision on Windows, and this affects normal float and double computations as well: the conversion to float or double happens only when storing to or loading from memory, while intermediate in-register operations are always carried out at the maximum enabled precision. Changing the precision before each operation is expensive, so it is not done. This is one of the causes of x87's apparent nondeterminism, since results depend on the compiler unpredictably spilling fp registers [1]: unless you always use the maximum enabled precision, computations might not be reproducible from one build to the next even in the same environment.

[1] Eventually GCC added compilation modes with deterministic behavior, but that was well after x87 was obsolete. In the meantime people had to make do with -ffloat-store and/or volatile. See https://gcc.gnu.org/wiki/FloatingPointMath.

edit: but you know this, as you mentioned it elsethread.
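To make the excess-precision point concrete, here's a minimal C sketch of my own (not from the thread), assuming compilation for x87 math with the control word at extended precision (the Linux default), e.g. gcc -O0 -m32 -mfpmath=387. The intermediate product is too large for a 64-bit double but fits in the 80-bit extended range, so whether it overflows depends on whether it is kept in a register or rounded by a store:

    #include <stdio.h>

    int main(void) {
        /* 1e308 * 10 is ~1e309: out of range for a 64-bit double, but
         * representable in the 80-bit extended format. */
        volatile double a = 1e308, b = 10.0;

        /* The intermediate a * b may stay on the x87 stack at extended
         * precision, so the final result can come out as 1e308. */
        double kept = a * b / b;

        /* Forcing the intermediate through a 64-bit memory slot rounds it
         * to double and it overflows to infinity -- the same kind of
         * rounding an unpredictable register spill introduces. */
        volatile double spilled = a * b;
        double stored = spilled / b;

        printf("kept   = %g\n", kept);    /* 1e308 with x87, inf with SSE math */
        printf("stored = %g\n", stored);  /* inf either way */
        return 0;
    }

Compiled with -mfpmath=sse instead (the x86-64 default), both results overflow to infinity, which is exactly the build-to-build and platform-to-platform sensitivity the GCC wiki page is about.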