| ▲ | bhickey 3 days ago |
| For analyses like this it just doesn't matter. Pick a metric and measure it over your workforce: across the universe of salient metrics of interest, you won't see a Gaussian. In a previous job I modelled this and concluded that, due to measurement error and year-over-year enrichment, Welchian rank-and-yank results in firing people at random. |
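A toy simulation makes the claim concrete (this is my own sketch, not bhickey's actual model; the noise level and cut fraction are assumptions). If measured performance is true ability plus noise of comparable magnitude, then firing the bottom decile by measurement only weakly selects the truly bottom decile:

```python
import random

random.seed(0)

N = 1000      # employees
CUT = 0.10    # fire the bottom 10% by measured score
NOISE = 1.0   # measurement noise comparable to true spread (assumption)

true_ability = [random.gauss(0, 1) for _ in range(N)]
measured = [a + random.gauss(0, NOISE) for a in true_ability]

k = int(N * CUT)
fired = set(sorted(range(N), key=lambda i: measured[i])[:k])
truly_bottom = set(sorted(range(N), key=lambda i: true_ability[i])[:k])

# Fraction of the fired who really are bottom-decile performers.
# Purely random firing would give ~0.10; perfect measurement, 1.00.
overlap = len(fired & truly_bottom) / k
print(f"fraction of fired who are actually bottom-decile: {overlap:.2f}")
```

With noise as large as the true spread, the overlap lands well short of 1.0; crank `NOISE` higher and it converges toward the random-firing baseline of 0.10.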
|
| ▲ | pembrook 3 days ago | parent | next [-] |
| All of Jack Welch’s management tactics should be considered suspect now. His performance at GE was 100% fueled by financial leverage that blew up in 2008-2009, basically killing the company. Nobody should be taking management lessons from this guy. |
| ▲ | lotsofpulp 3 days ago | parent [-] |
| > Nobody should be taking management lessons from this guy. |
| Rank and yank is simply about lowering labor costs once the business has achieved a significant moat and no longer needs to focus solely on growing revenue. It works as a negotiating tool for the labor buyer, via the continuous threat of termination. |
|
|
| ▲ | bhouston 3 days ago | parent | prev | next [-] |
| Stack ranking will tell you when something isn't working, but the solution isn't always to fire; you can also use that data to fix things in a more general way. I found that team composition and role assignment matter a lot, at least if you hire people who are above a certain bar. Match a brilliant non-assertive coder with someone who is outgoing, good at getting along, and at least a decent coder, and the two together generally outperform either of them individually. You can bring out the best in your employees or you can set them against each other; this either brings everyone up or brings everyone down. |
| ▲ | dataflow 3 days ago | parent [-] |
| Wholeheartedly agree with you on team composition mattering a ton, but how often do you have such an abundance of engineers and tasks that you can match them up the right way? |
| ▲ | bhouston 3 days ago | parent [-] |
| I think if you get to know your engineers, you can figure out the right pairings to bring out the best in them. But this requires intimate knowledge, and the result is probably subjective, depending on how good the manager is at managing coders. So I suppose that, from up high, stack-ranking-based firing is easier. Still, I think it is cheaper to build great teams than to keep doing brutal firings, though it may be a micro-optimization. |
|
|
|
| ▲ | Cheer2171 3 days ago | parent | prev [-] |
| So you're saying that if you don't think about construct validity and just pick any given metric that can spit out a comparable number across all your different positions and teams, these metrics have weird distributions? Hmm, I wonder why. |
| ▲ | munk-a 3 days ago | parent [-] |
| I think it's more charitable to interpret their statement as "for all metrics" rather than "run this experiment once and arbitrarily choose a single metric". Their statement is also a lot more actionable, because as much as we've tried over the decades, finding an accurate metric that represents performance seems to be an impossible task. |
| A researcher friend at a previous job once mentioned that in grad school he and several other students were assisting a professor on an experiment. Each grad student was given a specific molecule to evaluate in depth for fitness for a need (I forget what at this point), and one of the students had a molecule that was a good fit while the others did not. That student was credited on a major research paper and had an instant advantage in seeking employment as a researcher; the other students did not. My friend was an excellent science communicator and so fell into a hybrid role as a highly technical salesperson. But tell me: what metric in this scenario would best evaluate the researchers' relative performance? The outcome has a clear-cut answer, but it was entirely luck-based (in a perfect world). A lot of highly technical fields can have very smart people stuck on very hard, low-margin problems while other people luck into a low-difficulty solution that earns a company millions. |
| ▲ | withinboredom 3 days ago | parent [-] |
| Most of the world is ruled by luck: where you are born, who your parents are, how rich they are, who you know, whether or not someone “better” than you applies for the same position, etc. Ignoring luck, or trying to control for it, would be a mistake. |
| ▲ | munk-a 3 days ago | parent | next [-] |
| Ignoring luck is a requirement - conditions born from luck may be worth considering, but past luck is not a predictor of future luck. I'd clarify: trying to ignore someone's education because it's a result of their citizenship or the wealth of their family is going to be endlessly frustrating... but if your metrics can't exclude luck and happenstance during the execution of the task, then they're not worth much of anything. |
| ▲ | withinboredom a day ago | parent [-] |
| You can literally get a 1:1 model of capitalism by modeling luck alone. To ignore it is to pretend that you have some kind of control over luck. |
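A minimal sketch of that kind of luck-only model (my construction, not the commenter's; the shock sizes and round count are arbitrary assumptions): give everyone identical starting wealth and identical "skill", apply random multiplicative shocks each round, and the resulting distribution is still wildly unequal.

```python
import random

random.seed(1)

N = 10_000    # identical agents, equal starting wealth
ROUNDS = 50   # rounds of pure-luck multiplicative shocks (assumption)

wealth = [1.0] * N
for _ in range(ROUNDS):
    # Each round every agent's wealth is multiplied by a coin-flip shock;
    # no agent differs from any other in anything but luck.
    wealth = [w * random.choice([0.7, 1.5]) for w in wealth]

wealth.sort()
top1_share = sum(wealth[-N // 100:]) / sum(wealth)
print(f"share of total wealth held by the top 1%: {top1_share:.0%}")
```

Because the log of wealth is a random walk, the distribution becomes heavily skewed: the luckiest 1% end up holding far more than 1% of the total, despite everyone being identical by construction.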
|
|
|