stuxnet79 4 days ago
> On good tests, your score doesn't change much with practice, so the system is less vulnerable to Goodharting and people don't waste/spend a bunch of time gaming it

This framing of the problem is deeply troubling to me. A good test is one that evaluates candidates on the tasks that they will do at the workplace, and preferably connects those tasks to positive business outcomes. If a candidate's performance improves with practice, then so what? The only thing we should care about is that the interview performance reflects well on how the candidate will do within the company.

Skill is not a univariate quantity that stays fixed over time, and interview performance is also subject to confounding variables that drag it down. It doesn't matter if you hire the smartest devs: if the social environment and the quality of management are poor, then the work performance will be poor as well.
wyager 2 days ago | parent
> A good test is one that evaluates candidates on the tasks that they will do at the workplace

Systematizing this is not feasible. The next best thing (in terms of predictive power for future job success) is a direct IQ test, which is illegal in the US. The next best thing after that is an IQ proxy like coding-puzzle ability.

> If a candidate's performance improves with practice, then so what?

It means the test isn't measuring anything useful. The extremely broad-spectrum skills that benefit a software/eng role aren't something you can "practice".

> The only thing we should care about is that the interview performance reflects well on how the candidate will do within the company.

Agreed, and that is exactly what any Goodhartable test will never do.