| ▲ | CSMastermind 10 months ago |
| IQ scores have proven highly correlated with educational achievement, occupational attainment, career advancement, lifetime earnings, brain volume, cortical thickness, health, longevity, and more. To the point that they remain accurate predictors of these things even when controlling for factors like socioeconomic background. It's used because it works as a measuring tool; how the tests are constructed is largely irrelevant to the question of whether the outcome of the test is an accurate predictor of things we care about. If you think you have a better measuring tool, you should propose it and win several awards and accolades. No one has found one yet, in spite of many smart people trying for decades. |
|
| ▲ | ianbicking 10 months ago | parent | next [-] |
| I'm not saying the ranking is necessarily wrong, but that turning the ranking into a distribution is a construction. And it MIGHT be a correct construction, but I am less confident of that. The distribution implies something like "someone at 50% is not that different from someone at 55%" but "someone at 90% is very different from someone at 95%". That is: the x axis implies there's some unit of intelligence, and the actual intelligence of people in the middle is roughly similar despite ranking differences. The distribution also implies that out at the extremities the same ranking differences reflect greater differences in intelligence. |
| |
| ▲ | HDThoreaun 10 months ago | parent | next [-] | | The distribution implies that a score of 100 means you did better than half the population, and that a score of 130 means you scored 2 standard deviations above the mean, i.e., better than about 98% of other people. We have no objective measure of IQ, so we use relative rankings. If you used a uniform (percentile) distribution for IQ, everyone currently above 145 would score 99 out of 100. A normal distribution is useful when you want to differentiate points in the tails | |
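The score-to-percentile mapping the comment describes can be checked directly. A minimal sketch, assuming the standard norming of IQ as N(100, 15); the function name is illustrative, not from the thread:

```python
from math import erf, sqrt

def iq_percentile(score, mean=100.0, sd=15.0):
    """Fraction of the population scoring below `score` under N(mean, sd)."""
    z = (score - mean) / sd
    return 0.5 * (1 + erf(z / sqrt(2)))  # standard normal CDF via erf

print(round(iq_percentile(100) * 100, 1))  # 50.0 -> better than half
print(round(iq_percentile(130) * 100, 1))  # 97.7 -> two SD above the mean
print(round(iq_percentile(145) * 100, 1))  # 99.9 -> on a uniform 0-100 scale,
                                           #   all of these collapse into "99"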
| ▲ | Glyptodon 10 months ago | parent | prev [-] | | It does seem like you should assume the accuracy of the result decreases as you move away from the norm of an IQ test, though I have no idea whether that's been validated. But particularly if there are mistakes in the test questions, or any kind of ambiguity in them, that's what you'd expect. Say you have two different IQ tests and someone takes one and gets 100, normed to the 50th percentile: maybe you have 95% confidence that on the next test they'd also get 100 +/- 2.5. But if they get 140, normed to roughly the 99th percentile, maybe your 95% confidence interval for the next test is 140 +/- 12.5. (I really don't know; I just suspect that the higher someone's percentile, the less confidence you'd have. I mostly know stats from physical and bio science labs, not from IQ or human evaluation contexts.) |
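One way to make this intuition concrete is classical test theory: if two parallel tests have reliability r, the expected retest score regresses toward the mean, so extreme first scores are less repeatable. A sketch under an assumed reliability of 0.9 (the value and function name are illustrative, not from the thread):

```python
from math import sqrt

def retest_interval(score, r=0.9, mean=100.0, sd=15.0, z=1.96):
    """Approximate 95% interval for a retest score, given a first score."""
    expected = mean + r * (score - mean)  # regression toward the mean
    se = sd * sqrt(1 - r * r)             # conditional SD of the retest score
    return (expected - z * se, expected + z * se)

lo, hi = retest_interval(100)
print(f"first score 100 -> retest in [{lo:.1f}, {hi:.1f}]")
lo, hi = retest_interval(140)
print(f"first score 140 -> retest in [{lo:.1f}, {hi:.1f}]")
# The second interval is centered at 136, not 140: the more extreme the
# first score, the further the best guess for the retest shifts back.
```

Note that in this simple model the interval width is the same everywhere; what changes at the extremes is the systematic pull toward the mean.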
|
|
| ▲ | jprete 10 months ago | parent | prev [-] |
| The GP is saying that IQ tests are deliberately calibrated and normalized to produce a Gaussian output, and that the input is not necessarily a Gaussian distribution of any particular quantity. This doesn't say anything in particular about whether it's useful, just that people should be careful interpreting the values directly. |
| |
| ▲ | lokar 10 months ago | parent [-] | | Exactly. This is a criticism of the article where it says that HR has a good reason for assuming employee performance would be Gaussian, since IQ is Gaussian. IQ is defined as being Gaussian | | |
| ▲ | ip26 10 months ago | parent [-] | | If IQ is a good predictor of employee performance, then it does follow that employee performance would be Gaussian. It doesn’t matter that IQ was “made” to be Gaussian. | | |
| ▲ | Majromax 10 months ago | parent | next [-] | | Not necessarily. A "good predictor" could still result in non-Gaussian performance for at least two reasons: 1. The prediction could be on a relative rather than quantitative basis. If IQ(A) > IQ(B) always implies Perf(A) > Perf(B), then the absolute distribution of each could still be arbitrary. 2. A "good predictor" in the social sciences doesn't always mean that it explains a large part of the variance. If IQ quantitatively correlates with observed performance on some scale but explains only 25% of the variance, then the distributions could still be quite different. Furthermore, if you're making this kind of quantitative comparison you must also have a quantitative performance measurement, whereupon its probability density function should be much easier to estimate directly. | |
| ▲ | lokar 10 months ago | parent | prev [-] | | Are you assuming that employee performance is Gaussian? |
|
|
|