scarface_74 a day ago
If your coding assessment can be done with AI and the code that the candidate is expected to write can’t be, doesn’t that by definition mean you are testing for the wrong thing during your coding interview?
theamk a day ago
Absolutely. We've switched from coding tests (much simpler than HackerRank's) to a debugging problem. It relies on an external API, so we get to see the candidate's train of thought, and naive attempts to cheat with ChatGPT are trivially detectable. But this is an arms race, of course. I have no doubt LLMs could solve that problem too (they might already be able to, with the right prompt). And then we'd have to make it even more realistic... How does one fit "here is a 1M-line codebase and a user complaint, fix the problem" into the format of a 1-hour interview? We'll either have to solve this, or switch to in-person interviews and ban LLMs.
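To give a flavor of the format (this is an invented sketch, not our actual problem; the endpoint, field names, and the planted bug are all hypothetical): the candidate gets a small function that misbehaves against a live API, and we watch how they narrow it down.

    # Hypothetical exercise: fetch_all_users hangs (or returns partial data)
    # against the live API. The candidate has to probe the API to see why.
    import requests

    def fetch_all_users(base_url):
        """Meant to return every user across all pages of a paginated API."""
        users = []
        page = 1
        while True:
            resp = requests.get(f"{base_url}/users", params={"page": page})
            resp.raise_for_status()
            data = resp.json()
            users.extend(data["results"])
            if not data.get("next"):  # API sets "next" to null on the last page
                break
            # Planted bug: `page` is never incremented, so the loop keeps
            # re-fetching page 1 instead of walking through the pages.
        return users

Someone who pastes this straight into ChatGPT will likely get the one-line fix instantly, but without having actually exercised the API they can't explain the symptoms they observed, and that's what gives the cheating away.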
cute_boi a day ago
Everyone says this, but what is the best objective way to know a candidate is good for the position? Leetcode is still the best option imo.