jerf 2 days ago
They generally do not know what they are looking for. They are generally untrained, and if they are trained, the training is probably all about using leetcode-type problems to produce interviews sufficiently similar to one another that you can run stats on the results and call them "objective", which is exactly the thing we are all quite correctly complaining about. That is perhaps anti-training.

The problem is that the business side wants to reduce interviewing to an objective checklist, but you can't do that because of Goodhart's Law [1]. AI is throwing this problem into focus because it is basically capable of passing any objective checklist, with just a bit of human driving [2]. Interviews cannot consist of "I'm going to ask a question, and if you give me the objectively correct answer you get a point, and if you do not, you don't."

The risk of hiring someone who could give the objectively correct answers but couldn't program their way out of a wet paper bag, let alone do requirements elicitation in collaboration with other humans, or architecture, or risk analysis, or any of the many other things a real engineering job consists of, was already pretty high before AI. But once interviewing is no longer a matter of saying the objectively correct things, a lot of people at all levels are simply incapable of handling it. The Western philosophical mindset doesn't handle this sort of thing very well.

[1]: https://en.wikipedia.org/wiki/Goodhart%27s_law

[2]: Note this is not necessarily bad because "AI bad!", but if all the human on the other end can offer me is that they can drive the AI, I don't need them. I can do it myself and/or hire any number of other such people. You need to bring something to the job other than the ability to drive an AI, and you need to demonstrate whatever that is in the interview process. "I can type what you tell me into a computer and then fail to comprehend the answer it gives" is not a value-add.
bossyTeacher a day ago
> The Western philosophical mindset doesn't handle this sort of thing very well.

Mind elaborating on that?