▲ ErroneousBosh 7 hours ago
It's probably because you'd need two things to line up: a precision issue where the numbers come out ever so slightly differently, and some other effect where that tiny difference matters, like a guard standing slightly too close and getting clipped by a door.

I debugged some software synthesizer code a while back (about 20 years ago, now I think of it) where a build on one platform failed because of a precision bug. I can't remember the details, but there was a lot of "works fine on my machine" discussion around it. Anyway, it relied on a crude simulation of an RC circuit asymptotically decaying very close to 0 to trigger a state change, but on something like 64-bit Intel with a specific processor it never quite got low enough to trip the comparison, because of something to do with denormals not being flushed. From an electronic standpoint, treating "it's high enough" as about 0.7 and "it's low enough" as about 0.01 was far closer to the instrument they were trying to simulate, and making it massively imprecise like that got it going on everything.
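Roughly the shape of it, sketched in C. The decay coefficient, threshold values, and function names here are my own illustration of the idea (exponential decay never actually reaches 0, so compare against thresholds that match the instrument instead of a tiny epsilon), not the original synth code:

    #include <stdio.h>

    #define SAMPLE_RATE 48000.0
    #define DECAY_COEFF 0.9995   /* one-pole decay per sample (illustrative) */

    /* Fragile: waits for the level to reach an extremely small value.
     * Whether this ever trips can depend on how the platform handles
     * denormals, which is the kind of bug described above. */
    int gate_closed_fragile(double level)
    {
        return level < 1e-30;
    }

    /* Robust: thresholds chosen to match the real instrument, with
     * "high enough" and "low enough" well separated (hysteresis). */
    int gate_open_robust(double level)   { return level > 0.7;  }
    int gate_closed_robust(double level) { return level < 0.01; }

    int main(void)
    {
        double level = 1.0;   /* envelope starts "charged" */
        long samples = 0;

        /* RC-style exponential decay toward 0; stop on the robust check. */
        while (!gate_closed_robust(level)) {
            level *= DECAY_COEFF;
            samples++;
        }
        printf("gate closed after %ld samples (%.3f s)\n",
               samples, samples / SAMPLE_RATE);
        return 0;
    }

The point of the two-threshold version is that it no longer cares about the last few bits of the float at all, so differences in denormal handling between platforms stop mattering.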
▲ lomase 26 minutes ago
It's funny, because the only code I have ever read that flushed denormals was synth code.