mlyle 3 days ago

There's a little bit of a philosophical thing here -- do you adjust the earlier measurement by some function because it's usually been high and revised downwards? If you do, you need to let every user know that you're doing something different, etc.

The worst case is that both the statistics orgs and the users are adjusting the numbers for a bias and overshooting.
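
A toy illustration of that overshoot, in Python, with made-up numbers (the ~100k bias and the job figures here are hypothetical, just to show the mechanism):

    # Suppose the preliminary print has historically run ~100k jobs high.
    preliminary = 250_000
    historical_bias = 100_000

    # The statistics org quietly starts subtracting the bias at the source...
    published = preliminary - historical_bias       # 150k

    # ...while a user, unaware of the change, keeps applying their own fix:
    users_view = published - historical_bias        # 50k -- corrected twice, overshot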

This means there's a certain inertia: it can be better to handle the interim reports the same, even if they've been biased one way for several years, than to introduce a change that makes the numbers not comparable to history.

> 50% error.

It's not a 50% error; it's a 50% error in the magnitude of the change.

That's like saying my room's temperature increased from 71.4 to 71.6 degrees, but my thermometer only saw an increase from 71.4 to 71.5; therefore, my thermometer has a 50% error.
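
Spelled out with those numbers, as a quick Python sketch of the arithmetic:

    true_old, true_new = 71.4, 71.6    # what actually happened
    meas_old, meas_new = 71.4, 71.5    # what the thermometer reported

    # Error in the level: tiny.
    level_error = abs(meas_new - true_new) / true_new    # ~0.0014, i.e. about 0.14%

    # Error in the change: a measured 0.1 degrees vs. a true 0.2 degrees.
    change_error = abs((meas_new - meas_old) - (true_new - true_old)) / (true_new - true_old)    # 0.5, i.e. 50%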

deepGem 3 days ago | parent [-]

> This means there's a certain inertia: it can be better to handle the interim reports the same, even if they've been biased one way for several years, than to introduce a change that makes the numbers not comparable to history.

This is a very interesting point. So if the BLS suddenly became more accurate, all the downstream agencies would have to re-tune their own bias corrections, which could lead to short-term discrepancies.

What looks like inefficiency through one lens is actually efficiency through a totally different one.
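
A rough sketch of that transition period, reusing the same hypothetical ~100k bias from above (all figures made up):

    stale_user_correction = -100_000   # tuned against the old, biased preliminaries

    # Once the source fixes its methodology, the preliminary already comes in near 150k,
    # but a user still applying the stale correction reads it as 50k --
    # a short-term discrepancy that lasts until everyone re-tunes.
    users_view = 150_000 + stale_user_correction    # 50k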

mlyle 3 days ago | parent [-]

Yup. Obviously you want to fix the bias in the early version of the numbers eventually.

But you don't want to change what you're doing all the time, so you stay an easy product for everyone else to use.

(Interesting that this "overreport jobs in the preliminary numbers" bias has shown up; in older data using similar methodology it didn't exist, but now it seems to...)