snickell | a day ago
TL;DR: best case, LLMs break algo-whiteboarding interviews so visibly that we stop using them (they're already invisibly broken). Some engineers are genuinely better at using ChatGPT and friends to solve problems, and that's a skill worth including when evaluating potential software engineers. I would advocate that interviewers go with the flow: allow use of LLMs, but have candidates do it on the shared screen so you can see how they think, just like using a web browser in interviews.

I've been a professional programmer since 25 BAI (25 years before AI). I see junior engineers who use it disastrously to create piles of messed-up spaghetti code, and junior engineers who use it skillfully and judiciously to create good code and solve problems far above their years of experience. Both of them will use LLMs in their daily work. One of them will destroy the codebase, and the other will far exceed the expectations we had for a "2 yoe eng" only 2 years ago. I want to identify the latter, and filter out the ones who aren't good with ChatGPT and will use it to make messes.

I want to identify experienced Senior Engineers who kick ass. They can do it with or without ChatGPT, I don't care which, and I don't want an interview format that keeps those who use ChatGPT from showing their true strength. I simply don't care how well you can recreate (and pretend to have rediscovered, lol) PhD-thesis algorithms from memory alone with no internet access: that is not a relevant skill in 2024.

I won't cry if "allow LLMs but watch how candidates use them and think" doesn't scale to "no skin in the game" offline tests, where interviewers try to get candidates to spend an hour without the company matching it with an hour of employee/interviewer time. In fact, if this kills offline testing (and it probably will), I think that's a win for labour.
What ChatGPT "cheating" reveals, imo, is how weak and artificial whiteboarding interview questions are: they claim to be problem-solving tests, but they're actually algorithm-memorization tests. Interviewing without allowing use of LLMs is like requiring "no web browsing during the interview": it artificially biases toward test-specific skills and away from real-world strengths. If an LLM can straight up solve the problem, it's not a very good match for the skills required in 2024: it's now a problem that SHOULD be solved via LLM. It turns out that LLMs are better at rote memorization than we are. Good!

The "whiteboarding algos" interview technique was already a bad match for real-world SWE skills except in relatively rare job roles, and it's a good thing that its artificiality let ChatGPT break it irreparably. Bye bye, won't miss you ;-)

Most bog-standard software engineering work is boring things like good variable naming, effective use (and non-overuse) of encapsulation, designing a schema that won't make you hate life in 5 years, wise selection of dependencies, etc. Testing live with a candidate, watching them use ChatGPT, seeing how they think: this is what gets you a job-relevant skill signal.