nritchie 6 hours ago
A handful of the comments are skeptical of the utility of this method. I can tell you as a physical scientist that it is common to make the same measurement with a number of measuring devices of differing precision (e.g., when developing a consensus standard via a round-robin). The technique Cook suggests can be a reasonable way to combine the results into an optimal measured value.
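As a minimal sketch of the idea (my own example, not from the article or the comment above): each measurement is weighted by its inverse variance, which minimizes the variance of the combined estimate. The numbers below are made up.

    # Inverse-variance weighting of two unbiased measurements
    # x1, x2 with known variances v1, v2.
    def combine(x1, v1, x2, v2):
        """Return the minimum-variance combination of two unbiased
        measurements and the variance of the result."""
        w1 = (1 / v1) / (1 / v1 + 1 / v2)
        w2 = 1 - w1
        x = w1 * x1 + w2 * x2
        v = 1 / (1 / v1 + 1 / v2)   # always <= min(v1, v2)
        return x, v

    # Example: a precise device (variance 0.01) and a cruder one (variance 0.04)
    print(combine(10.2, 0.01, 10.5, 0.04))  # -> (10.26, 0.008)

Note that the combined variance is smaller than either individual variance, so even the crude device adds information.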
shoo 3 hours ago
I wonder if this minimum-variance approach of averaging the measurements agrees with the estimate of the expected value we'd get from a Bayesian approach, at least in a simple scenario: say, a uniform prior over the thing we're measuring, with two measuring devices whose errors are unbiased and normally distributed.
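For what it's worth, under exactly those assumptions the two should agree: with a flat prior and independent Gaussian measurement errors, the posterior over the true value is itself Gaussian, and its mean is the inverse-variance weighted average. A sketch of the algebra (my notation):

    p(\mu \mid x_1, x_2) \propto
        \exp\!\left(-\frac{(x_1-\mu)^2}{2\sigma_1^2}\right)
        \exp\!\left(-\frac{(x_2-\mu)^2}{2\sigma_2^2}\right)

which is Gaussian in \mu, with

    \mathbb{E}[\mu \mid x_1, x_2]
        = \frac{x_1/\sigma_1^2 + x_2/\sigma_2^2}{1/\sigma_1^2 + 1/\sigma_2^2},
    \qquad
    \operatorname{Var}(\mu \mid x_1, x_2)
        = \frac{1}{1/\sigma_1^2 + 1/\sigma_2^2}.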
sfpotter 5 hours ago
I'm not a physical scientist, but I spend a lot of time assessing the performance of numerical algorithms, which is maybe not totally dissimilar to measuring a physical process with a device. I've gotten good results applying Simple and Stupid statistical methods. I haven't tried the method described in this article, but I'm definitely on the lookout for an application of it now.
geon 2 hours ago
Like a Kalman filter?
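In the simplest setting the connection is direct: a scalar Kalman measurement update with no process dynamics is the same inverse-variance combination in disguise. A quick sketch under those assumptions (my code, made-up numbers):

    # Scalar Kalman measurement update vs. inverse-variance weighting.
    # x, P: prior estimate and its variance; z, R: measurement and its noise variance.
    # (Static state assumed, i.e. no predict step -- my simplification.)
    def kalman_update(x, P, z, R):
        K = P / (P + R)                       # Kalman gain
        return x + K * (z - x), (1 - K) * P   # updated estimate and variance

    def inverse_variance(x, P, z, R):
        v = 1 / (1 / P + 1 / R)
        return v * (x / P + z / R), v

    print(kalman_update(10.2, 0.01, 10.5, 0.04))     # -> (10.26, 0.008)
    print(inverse_variance(10.2, 0.01, 10.5, 0.04))  # -> (10.26, 0.008)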