Cheer2171 3 days ago
So you're saying that if you don't think about construct validity and just pick any given metric that can spit out a comparable number across all your different positions and teams, that these metrics have weird distributions? Hmm, I wonder why.
munk-a 3 days ago | parent
I think it's more charitable to interpret their statement as "for all metrics" rather than "run this experiment once and arbitrarily choose a single metric". Their statement is a lot more actionable, because as much as we've tried over the decades, finding an accurate metric that represents performance seems to be an impossible task.

A researcher friend at a previous job once mentioned that in grad school he and several other students were assisting a professor on an experiment. Each grad student was given a specific molecule to evaluate in depth for fitness for a need (I forget what at this point), and one of the students had a molecule that was a good fit while the others did not. That student was credited on a major research paper and had an instant advantage in seeking employment as a researcher; the other students did not. My friend was an excellent science communicator and so fell into a hybrid role as a highly technical salesperson.

But tell me - what metric in this scenario would best evaluate the researchers' relative performance? The outcome has a clear-cut answer, but it was entirely luck-based (in a perfect world). A lot of highly technical fields can have very smart people stuck on very hard, low-margin problems while other people luck into a low-difficulty problem whose solution earns the company millions.