| ▲ | naet a day ago |
| The author says "whiteboard tests" are broken, but it seems like they're arguing that online coding assessments are broken, not in-person interviews using an actual whiteboard. Doing an in-person interview on a whiteboard sidesteps the AI issue. As someone who's done a large number of remote interviews, I can say there are clear signs that some candidates try to cheat on online tech interviews. I wonder if the trend will fuel a return to the office, or at least a return to in-person interviewing at more companies. |
|
| ▲ | scarface_74 a day ago | parent | next [-] |
| If your coding assessment can be done with AI and the code that the candidate is expected to write can’t be, doesn’t that by definition mean you are testing for the wrong thing during your coding interview? |
| |
| ▲ | theamk a day ago | parent | next [-] | | Absolutely. We've switched from coding tests (much simpler than HackerRank's) to a debugging problem. It relies on an external API, so we get to see the candidate's train of thought, and naive attempts to cheat with ChatGPT are trivially detectable. But this is an arms race, of course. I have no doubt LLMs could solve that problem too (they may be able to already, with the right prompt), and then we'd have to make it even more realistic... How does one fit "here is a 1M-line codebase and a user complaint, fix the problem" into the format of a 1-hour interview? We'll either have to solve this, or switch to in-person interviews and ban LLMs. | |
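The comment above only names the format; here is one hypothetical shape such a debugging exercise could take. Everything in it (the `StubClient`, `fetch_user_ids`, and the planted pagination bug) is invented for illustration, not the actual interview question:

```python
# Sketch of a debugging-problem interview task: the candidate is
# given code that calls an external API via a client object, plus a
# report that results are wrong, and must reason aloud to the bug.

class StubClient:
    """Deterministic stand-in for the external API (hypothetical)."""

    def __init__(self, pages):
        self.pages = pages  # dict: 1-indexed page number -> list of ids

    def get(self, query, page):
        return {"ids": self.pages[page], "pages": len(self.pages)}


def fetch_user_ids(client, query):
    """Collect user ids across a paginated API.

    Planted bug: the loop iterates range(1, pages) against a
    1-indexed API, so page 1 is fetched twice and the last page is
    never fetched.
    """
    resp = client.get(query, page=1)
    ids = list(resp["ids"])
    for page in range(1, resp["pages"]):  # bug: should be range(2, pages + 1)
        resp = client.get(query, page=page)
        ids.extend(resp["ids"])
    return ids


def fetch_user_ids_fixed(client, query):
    """Reference solution the candidate is expected to reach."""
    resp = client.get(query, page=1)
    ids = list(resp["ids"])
    for page in range(2, resp["pages"] + 1):  # pages 2..last, each once
        resp = client.get(query, page=page)
        ids.extend(resp["ids"])
    return ids
```

The appeal of this format is exactly what the comment describes: a pasted-in LLM answer that doesn't engage with the stubbed API's actual behavior is easy to spot, while a genuine candidate's narration of how they localized the off-by-one is informative.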
| ▲ | cute_boi a day ago | parent | prev [-] | | Everyone says this, but what is the best objective way to know whether a candidate is a good fit for the position? LeetCode is still the best option imo. | | |
| ▲ | lesuorac a day ago | parent | next [-] | | Yeah, it's weird, because the whole point of having a system for hiring involving common questions, rubrics, etc. is that at the end of the day you can either show that scoring well on the interview correlates with higher end-of-year performance reviews, or fail to show that and alter your interview system until it does. You guys can keep posting these articles that have zero statistical rigor; they're not going to change a process that came about because it had statistical significance. Remember, Google used to be known for asking questions like "How many piano tuners are in NYC?" Those questions are gone not because somebody wrote a random article insulting them; they're gone because somebody did the actual math and showed they weren't effective. | | |
| ▲ | scarface_74 a day ago | parent [-] | | Yes, because of Google’s rigorous hiring process they have had so many successful products outside of selling ads against search… I’ve done my stint in BigTech; most developers are not doing anything groundbreaking |
| |
| ▲ | scarface_74 a day ago | parent | prev [-] | | Give them a simple real-world use case where they have to fix existing code by making the unit tests pass. Never in almost 30 years of coding have I had to invert a binary tree or do anything approaching what LeetCode tests for. Well, actually, I did have to write DS-style code when writing low-level cross-platform C in the late 90s without a standard library. But how many people have to do that today? And how is LeetCode the best test when all it measures is someone’s ability to memorize patterns they don’t use in the real world? |
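A minimal sketch of the kind of "make the failing unit test pass" exercise suggested above. The function, the rounding bug, and the test are all hypothetical, invented only to show the shape of the format:

```python
# The candidate receives apply_discount plus a failing test asserting
# apply_discount(999, 10) == 899, and is asked to make the test pass.

def apply_discount(price_cents, percent):
    """Return the price after a percentage discount.

    Planted bug: floor division truncates the discount, so a 10%
    discount on 999 cents removes 99 cents instead of the correctly
    rounded 100, yielding 900 rather than 899.
    """
    discount = price_cents * percent // 100  # bug: truncates toward zero
    return price_cents - discount


def apply_discount_fixed(price_cents, percent):
    """What the candidate is expected to arrive at: round the
    discount to the nearest cent before subtracting."""
    discount = round(price_cents * percent / 100)
    return price_cents - discount
```

Unlike a LeetCode puzzle, the skill exercised here (reading unfamiliar code, reproducing a failure, reasoning about integer arithmetic) is something working developers actually do.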
|
|
|
| ▲ | ipaddr a day ago | parent | prev | next [-] |
| Is using AI cheating when it's part of the job now? Is not using AI signalling inexperience in the LLM department? |
| |
| ▲ | paxys a day ago | parent | next [-] | | Copy-pasting code from ChatGPT doesn't mean you have any kind of understanding of LLMs. | |
| ▲ | finnthehuman a day ago | parent | prev | next [-] | | Yes, obviously. Cheating is subverting the tester's intent and being dishonest about it, not just whatever a lawyer can weasel-word their way around. | | |
| ▲ | gopher_space a day ago | parent [-] | | It’s not dishonest, it’s just business. I’m under the exact same burden of truth as the company interviewing me; zilch. | | |
| ▲ | theamk a day ago | parent | next [-] | | Fair enough! In that case, it seems the policy "hard-fail any candidate who cheats, using AI or otherwise" is working as expected. Interviews are supposed to be a candidate's best showing. If that includes cheating, better to fail them fast. | |
| ▲ | a day ago | parent | prev [-] | | [deleted] |
|
| |
| ▲ | chefandy a day ago | parent | prev | next [-] | | I wonder if OpenAI/Google/Microsoft, et al. would hire a developer who leaned heavily on ChatGPT, etc. to answer interview questions? Not that I expect them to have ethical consistency when much more important factors (profit) are on the table, but after several years of their marketing pushing the idea that these are ‘just tools’ and the output is tantamount to anything manually created by the prompter, it would look pretty blatantly hypocritical if they didn't. | |
| ▲ | zamalek a day ago | parent | prev | next [-] | | Amazon uses Hackerrank and explicitly says not to use LLMs. In that case it would be cheating. However, given that everyone is apparently using it, I now feel dumb for not doing so. | | |
| ▲ | deprecative a day ago | parent [-] | | They made tools to make us redundant and are upset we're forced to use those tools to be competitive. |
| |
| ▲ | surgical_fire a day ago | parent | prev | next [-] | | Depends on what kind of developer you are trying to hire, maybe. | |
| ▲ | ChrisMarshallNY a day ago | parent | prev | next [-] | | That's actually a valid question. It looks like it was an unpopular one. Personally, I despise these types of tests. In 25 years as a tech manager, I never gave one, and never made a technical hiring mistake (though I did make a number of personality ones: great technical acumen is worthless if they collapse under pressure). But AI is going to be a ubiquitous tool, available to pretty much everyone, so testing for people who can use it is quite valid. Results matter. But don't expect to have people on board who can operate without AI. That may be perfectly acceptable. The tech scene is so complex these days that not one of us can actually hold it all in our head. I freely admit to having powerful "google-fu" when it comes to looking up solutions to even very basic technical challenges, and I get excellent results. | |
| ▲ | gitremote a day ago | parent | prev | next [-] | | So now there are job applicants not only pretending to know DSA by using ChatGPT, but also claiming they have "experience in the LLM department". It's not part of the job now, unless you're too inexperienced to estimate how long it takes to find subtle bugs. | |
| ▲ | IshKebab a day ago | parent | prev [-] | | It's cheating if you don't say you're using it. | | |
| ▲ | hmottestad a day ago | parent [-] | | At some point I assume it'll be so normal that you'll almost have to say when you're not using it. I don't need to say that I'm using a text editor instead of punched cards. It's also quite common to use an IDE instead of a plain text editor in coding interviews these days. When I was a student, I remember teachers saying they considered an IDE cheating, since they wanted to test our ability to remember syntax and to keep a mental picture of our code in our heads. |
|
|
|
| ▲ | bsder a day ago | parent | prev [-] |
| > or at least a return to in-person interviewing for more companies. |
|
| This has been broken for a while now, and companies still haven't reset to deal with it. The incentives to the contrary are too large. |
| |
| ▲ | unavoidable a day ago | parent [-] | | The disincentives are huge, though. Hiring a bad employee is very expensive, and a bad hire is hard to get rid of. | | |
| ▲ | Yoric 16 hours ago | parent | next [-] | | In which country? In France, for instance, you typically have a six-month, no-questions-asked window to fire a new hire if they prove to be a bad employee. Presumably, if you haven't found out in six months, you wouldn't find out by changing the interviewing strategy. | |
| ▲ | ipaddr a day ago | parent | prev [-] | | Isn't it as simple as going on a PIP at FAANGs, a short conversation with the founder at a startup, and a few weeks' notice pay? | | |
| ▲ | paxys a day ago | parent | next [-] | | The process is anything but simple at large companies. Even if the new hire is a complete fraud and can barely write code it'll still take an average manager 6-12 months to be able to show them the door. And it'll involve countless meetings and a mountain of paperwork, all taking away time from regular work. And then it'll take another 6 months to get a replacement and onboard them. That means your team has lost over a year of productivity over a single bad hire. | |
| ▲ | viraptor a day ago | parent | prev | next [-] | | That comes after the decision that you can't fix the situation, which comes after you discovered that the hire was bad, which comes after a number of visible failures. That's a lot of wasted time/effort, even if the firing itself is simple. | |
| ▲ | jamesfinlayson a day ago | parent | prev | next [-] | | Depends on the country I think - in Australia at least it seems like you can sue for unfair dismissal if you're angry about being kicked out, so HR departments only seem to get rid of someone as a last resort. | |
| ▲ | gopher_space a day ago | parent | prev | next [-] | | The cost of hiring, firing, and rehiring approximates the position's yearly salary. | |
| ▲ | deprecative a day ago | parent | prev [-] | | In my area they just tell you to leave. No warning. No severance. Midwest US. |
|
|
|