rendaw | 4 days ago
While it does sound like GP missed a distinction, I don't see how (-2.35, 2.35) would be sensible. The extremes can happen (or else they wouldn't be part of the input intervals) and the code has to sensibly deal with that event in order to be correct.
esrauch | 4 days ago
The reason is that the uniform distribution is very rare. In almost no real-world scenario is something equally likely to be 2, 0, and -2, yet literally impossible to be -2.01. It exists, but it's not the normal case.

In the noisy-sensor case there's some arbitrarily low probability of the readings being wildly wrong; if you go by true 10^-10 outlier bounds they will be useless for any practical purpose, while the 99% confidence range stays relatively small. More often you want some other distribution, where (-2, 2) is the 90th-percentile interval rather than an absolute bound: 0 is more likely than -2, and -3 is possible but rare. They're not bounds; you can ask your model for the 99th or 99.9th percentile value, or whatever tolerance you want, and get something outside (-2, 2).
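To make that concrete, here's a minimal sketch in Python using the standard library's statistics.NormalDist; the zero-centered normal sensor model and the choice of (-2, 2) as the central 90% interval are assumptions for illustration, not anything stated in the thread:

    from statistics import NormalDist

    # Assumed model: readings are normal around 0, and (-2, 2) is taken
    # to be the central 90% interval rather than a hard bound.
    sigma = 2 / NormalDist().inv_cdf(0.95)   # pick sigma so P(-2 < X < 2) = 0.90
    model = NormalDist(mu=0.0, sigma=sigma)

    # The same model answers for whatever tolerance you care about;
    # a stricter tolerance just gives a wider (still finite) interval.
    for p in (0.90, 0.99, 0.999):
        hi = model.inv_cdf(0.5 + p / 2)      # upper end of the central p-interval
        print(f"{p:.1%} central interval: ({-hi:.2f}, {hi:.2f})")

The 99% and 99.9% intervals come out wider than (-2, 2), but nothing is ever treated as flatly impossible the way a hard interval bound would make it.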
kccqzy | 4 days ago
Interval arithmetic isn't useful because it only tells you the extreme values, not how likely those values are, so you end up interpreting them as uniformly distributed. Operations like multiplication change the shape of these distributions, so the uniform interpretation stops applying. Interval arithmetic therefore effectively carries an undefined underlying distribution that can change without being tracked.
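A quick Monte Carlo sketch of that shape change (Python; the two independent inputs uniform on (-2, 2) are an assumption standing in for whatever the intervals actually represent): interval arithmetic reports (-4, 4) for the product, but the probability mass over that interval is anything but uniform.

    import random

    N = 100_000
    xs = [random.uniform(-2, 2) for _ in range(N)]
    ys = [random.uniform(-2, 2) for _ in range(N)]
    products = [x * y for x, y in zip(xs, ys)]

    # Interval arithmetic says the product lies in (-4, 4), but most of the
    # mass piles up near 0 and the tails near +/-4 are nearly empty.
    print("P(|x*y| <= 1) ~=", sum(abs(p) <= 1 for p in products) / N)  # about 0.60
    print("P(|x*y| >  3) ~=", sum(abs(p) > 3 for p in products) / N)   # about 0.03

So even if the inputs really were uniform, a single multiplication already breaks the "uniform over the interval" reading.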
| ||||||||||||||||||||||||||
Dylan16807 | 2 days ago
-2 and 2 were not the extremes to begin with. |