| ▲ | tracker1 5 days ago |
| My biggest problem with leetcode type questions is that you can't ask clarifying questions. My mind just doesn't work like most do, and leetcode to some extent seems to rely on people memorizing leetcode type answers. On a few, there's enough context that I can relate real understanding of the problem to, such as the coin example in the article... for others I've seen there's not enough there for me to "get" the question/assignment. Because of this, I've just started rejecting outright leetcode/ai interview steps... I'll do homework, shared screen, 1:1, etc, but won't do the above. I tend to fail them about half the time. It only feels worse in instances, where I wouldn't even mind the studying on leetcode types sites if they actually had decent explainers for the questions and working answers when going through them. I know this kind of defeats the challenge aspect, but learning is about 10x harder without it. It's not a matter of skill, it's just my ability to take in certain types of problems doesn't work well. Without any chance of additional info/questions it's literally a setup to fail. edit: I'm mostly referring to the use of AI/Automated leetcode type questions as a pre-interview screening. If you haven't seen this type of thing, good for you. I've seen too much of it. I'm fine with relatively hard questions in an actual interview with a real, live person you can talk to and ask clarifying questions. |
|
| ▲ | samiv 5 days ago | parent | next [-] |
The LC interviews are like testing how fast people can run 100m after practice, while the real job is a slow, arduous, never-ending jog with multiple detours and stops along the way. But yeah, that's the game you have to play now if you want the top $$$ at one of the SMEGMA companies. I wrote (for example) my 2D game engine from scratch (3rd party libs excluded) https://github.com/ensisoft/detonator but would not be able to pass a LC type interview that requires multiple LC hard solutions and a couple of backflips on top. But that's fine, I've accepted that. |
| |
| ▲ | MarcelOlsz 4 days ago | parent | next [-] | | 5 years ago you'd have a project like that, talk to someone at a company for like 30m-1hr about it, and then get an offer. | | |
| ▲ | Voloskaya 4 days ago | parent | next [-] | | Did you mean to type 25? 5 years ago LC challenges were as, if not more, prevalent than they are today. And a single interview for a job is not something I have ever seen after 15 years in the space (and a bunch of successful OSS projects I can showcase). I actually have the feeling it’s not as hardcore as it used to be on average. E.g. OpenAI doesn’t have a straight-up LC interview even though they are probably the most sought-after company. Google and MS and others still do it, but it feels like it has less weight in the final feedback than it did before. Most en-vogue startups have also ditched it for real-world coding exercises. Probably due to the fact that LC has been thoroughly gamed and is even less of a useful signal than it was before. Of course some still do, like Anthropic, where you have to get a perfect score on 4 leetcode questions, automatically judged with no human contact, the worst kind of interview. | | |
| ▲ | nsxwolf 4 days ago | parent | next [-] | | I literally got my first real job 26 years ago by talking about my game engine, for a fintech firm. | |
| ▲ | saagarjha 4 days ago | parent | prev | next [-] | | I don’t know if this has changed or perhaps was not representative but my entire loop at Anthropic involved people reviewing my code. | | |
| ▲ | Voloskaya 4 days ago | parent [-] | | Might depend on the specific position you applied to. Was it a pure SDE role or more on the research side ? | | |
| ▲ | saagarjha 2 days ago | parent [-] | | Basically pure software engineering | | |
| ▲ | Voloskaya 2 days ago | parent [-] | | Interesting, maybe they stopped doing this then. It used to be that you received a link for an automated online test, with 4 progressively harder questions, and you needed to score 1000/1000 to go to the next step and speak to a human. |
|
|
| |
| ▲ | MarcelOlsz 4 days ago | parent | prev [-] | | There's an entire planet of jobs that have nothing to do with leetcode. I was talking about those, not FAANG stuff. Unfortunately I am not FAANG royalty. >Of course some still do, like Anthropic where you have to get a perfect score on 4 leetcode questions, automatically judged with no human contact, the worst kind of interview. Should be illegal honestly. | | |
| ▲ | camdenreslink 4 days ago | parent | next [-] | | 5 years ago non-FAANG companies were fully in leetcode mode for interviews. Maybe 10-15 years ago you could totally avoid it without much problem. | | |
| ▲ | pjmlp 4 days ago | parent [-] | | In most European companies that isn't a thing. Thankfully not everything from SV culture gets adoption. |
| |
| ▲ | aidenn0 4 days ago | parent | prev | next [-] | | It might be illegal; certainly if you can show that LC is biased against a protected class, then there would be grounds for a lawsuit. | | |
| ▲ | thaumasiotes 4 days ago | parent | next [-] | | > certainly if you can show that LC is biased against a protected class, then there would be grounds for a lawsuit. That wouldn't be hard to do. Given the disparate impact standard, everything is biased against a protected class. | |
| ▲ | eek2121 4 days ago | parent | prev [-] | | Only if there is enough evidence. Yes, I can say that the inability to account for things like the ADA in the US can place an employer in hot water; however, since LC doesn't make those decisions, they are immune. The accountability is placed upon the employer. Don't hate the players or the game. Maybe just figure out how to fix it without harming everyone, be popular enough to make said idea into law, and get into a position of power that allows you to do so. If that sounds hard, congrats, welcome to the reason why I never got into politics. Don't even get me started on all the people you will never realize you are hurting by fixing that one single problem. | | |
| ▲ | aidenn0 4 days ago | parent [-] | | I never meant to imply that LC would be violating the law. | | |
|
| |
| ▲ | almostgotcaught 4 days ago | parent | prev [-] | | > Should be illegal honestly. I can't imagine this kind of entitlement. If you don't want to work for them, don't study leetcode. If you want to work for them (and get paid tons of money), study leetcode. This isn't a difficult aristotelian ethics/morals question. | | |
| ▲ | MarcelOlsz 4 days ago | parent [-] | | I meant no human-in-the-loop wrt hiring, which is what I thought you were getting at. | | |
| ▲ | almostgotcaught 4 days ago | parent [-] | | It's the same exact thing - if some company makes you jump through hoops to get hired that you find distasteful just don't apply to company. | | |
| ▲ | antonvs 4 days ago | parent | next [-] | | Not all of us are market extremists. The “invisible hand of the market” doesn’t care about human rights. | | |
| ▲ | almostgotcaught 3 days ago | parent [-] | | I don't understand what you're saying. We're not talking about the market exploiting labor because before you are hired by the company you're not labor for the company. Is this really that difficult to understand? |
| |
| ▲ | MarcelOlsz 4 days ago | parent | prev | next [-] | | You don't know their interview process unless it's one of the big tech companies though. | |
| ▲ | matheusmoreira 4 days ago | parent | prev [-] | | No. Certain things just harm basic human dignity and should be outlawed. Judgement comes from our peers, not from machines. | | |
| ▲ | hn_go_brrrrr 4 days ago | parent [-] | | But sometimes also machines. ACLs are enforced by machines, and everyone is fine with that. |
|
|
|
|
|
| |
| ▲ | spike021 4 days ago | parent | prev | next [-] | | Not sure if that's a typo. 5 years ago was also pretty LC-heavy. Ten years ago it was more based on Cracking the Coding Interview. So I'd guess what you're referring to is even older than that. | | |
| ▲ | MarcelOlsz 4 days ago | parent [-] | | Talking about general jobs not FAANG adjacent. | | |
| ▲ | SJC_Hacker 4 days ago | parent | next [-] | | Nearly everyone is FAANG adjacent Apart from those companies where social capital counts for more ... | |
| ▲ | spike021 4 days ago | parent | prev [-] | | I rarely apply for or interview at FAANG or adjacent companies... |
|
| |
| ▲ | eek2121 4 days ago | parent | prev | next [-] | | I read this, and intentionally did not read the replies below. You are so wrong. You can write a library, even an entirely new language from scratch, and you will still be denied employment for that library/language. | |
| ▲ | lll-o-lll 4 days ago | parent | prev | next [-] | | > 5 years ago you'd have a project like that, talk to someone at a company for like 30m-1hr about it, and then get an offer. Based on my own experiences, that was true 25 years ago. 20 years ago, coding puzzles were now a standard part of interviewing, but it was pretty lightweight. 5 years ago (covid!) everything was leet-code to get to the interview stage. | |
| ▲ | lovich 4 days ago | parent | prev [-] | | I have been getting grilled on leet code style questions since the beginning of my career over 12 years ago. The faangs jump and then the rest of the industry does some dogshit imitation of their process | | |
| ▲ | MarcelOlsz 4 days ago | parent [-] | | I'm lucky I'm in the frontend webdev sphere then I guess instead of like being a pure backend guy. I've had a couple of those live ones and just denied them. I did manage to implement a "snake" algorithm once but got denied because I wasn't able to talk about time/space complexity. | | |
| ▲ | lovich 4 days ago | parent [-] | | As someone who’s hired 10s of engineers across multiple companies, it’s bullshit on the hiring side too. It was humbling having to explain to fellow adult humans that when your test question is based on an algorithm solving a real business problem that we work on every day, a random person is not going to implement a solution in one hour as well as we can. I’ve seen how the faangs interview process accounts for those types of bias and mental blindness and are actually effective, but their solutions require time and/or money so everywhere I’ve been implements the first 80% that’s cheap and then skips on the rest that makes it work | | |
| ▲ | MarcelOlsz 4 days ago | parent [-] | | >As someone who’s hired 10s of engineers across multiple companies Any way to reach out? :) I think it boils down to companies not wanting to burn money and time on training, and trying to come up with all sorts of optimized (but ultimately contrived) interview processes. Now both parties are screwed. >It was humbling having to explain to fellow adult humans that when your test question is based on an algorithm solving a real business problem that we work on every day, a random person is not going to implement a solution in one hour as well as we can. Tell me about it! Who were you explaining this to? |
|
|
|
| |
| ▲ | roncesvalles 4 days ago | parent | prev | next [-] | | >The LC interviews are like testing people how fast they can run 100m after practice Ah, but, the road to becoming good at Leetcode/100m sprint is: >a slow arduous never ending jog with multiple detours and stops along the way Hence Leetcode is a reasonably good test for the job. If it didn't actually work, it would've been discarded by companies long ago. Barring a few core library teams, companies don't really care if you're any good at algorithms. They care if you can learn something well enough to become world-class competitive. If you can show that you can become excellent at one thing, there's a good chance you can become excellent at another thing. That's basically also the reason that many Law and Med programs don't care what your major in undergrad was, just that you had a very high GPA in whatever you studied. A decent number of Music majors become MDs, for example. | | |
| ▲ | ascorbic 4 days ago | parent | next [-] | | LC interviews were made popular by companies that were started by CS students because they like feeling that this stuff is important. They're also useful when you have massive numbers of applicants to sift through because they can be automated and are an objective-seeming way to discard loads of applicants. Startups that wanted to emulate FAANGs then cargo-culted them, particularly if they were also founded by CS students or ex-FAANG (which describes a lot of them). Very, very few of these actually try any other way of hiring and compare them. Being able to study hard and learn something well is certainly a great skill to have, but leetcode is a really poor one to choose. It's not a skill that you can acquire on the job, so it rules out anyone who doesn't have time to spend months studying something in their own time that's inherently not very useful. If they chose to test skills that are hard and take effort to learn, but are also relevant to the job, then they can also find people who are good at learning on the job, which is what they are actually looking for. | |
| ▲ | grugagag 4 days ago | parent | prev | next [-] | | But why stop there? Why not test candidates with problems they have never seen before? Or problems similar to the problems of the organization hiring? Leetcode mostly relies on memorizing patterns with a shallow understanding but shows the candidates have a gaming ability. Does that imply quality in any way? Some people argue that willingness to study for leetcode shows some virtue. I very much disagree with that. | | |
| ▲ | roncesvalles 4 days ago | parent | next [-] | | I think you have a misunderstanding. Most companies that do LC-style interviews usually show unknown problems. Memorizing the Top 100 list from Leetcode only works for a few companies (notably and perplexingly, Meta) but doesn't for the vast majority. Also, just solving the problem isn't enough to perform well on the interview. Getting the optimal solution is just the table stakes. There's communication, tradeoffs between alternative solutions, coding style, follow-up questions, opportunities to show off language trivia etc. Memorizing problems is wholly not the point of Leetcode grinding at all. In terms of memorizing "patterns", in mathematics and computer science all new discovery is just a recombination of what was already known. There's virtually no information coming from outside the system like in, say, biology or physics. The whole field is just memorized patterns being recombined in different ways to solve different problems. | | |
| ▲ | grugagag 4 days ago | parent [-] | | It’s not about memorizing individual problems per se, but rather recognizing overall patterns and turning the process into a gameable endeavor. This can give candidates an edge, but it doesn’t necessarily demonstrate higher-level ability beyond surface familiarity with common patterns and the expectations around them. I’d understand the value if the job actually involved work similar to what's reflected in leetCode style problems, but in most cases, that couldn’t be further from reality. leetCode serves little purpose beyond measuring a candidate’s willingness to invest time and effort. That’s the only real virtue it rewards. But ultimately, I believe leetCode style interviews are measuring the wrong metric. | | |
| ▲ | roncesvalles 4 days ago | parent [-] | | >a candidate’s willingness to invest time and effort I guess it's a matter of opinion but my point is, this is probably the right metric. Arguably, the kind of people who shut up and play along with these stupid games because that's where the money is make better team players in large for-profit organizations than those who take a principled stance against ever touching Leetcode because their efforts wouldn't contribute anything to the art. | | |
| ▲ | grugagag 4 days ago | parent | next [-] | | Maybe yes maybe not, I'm leaning not but it's just an opinion. But as a company be careful what you wish for, these same candidates are often skilled at gaming systems and may leave your team as soon as they've extracted the benefits. They’re likely more interested in playing the game than in seriously solving real-world problems. | |
| ▲ | galaxyLogic 4 days ago | parent | prev [-] | | Then what if the test was how well you play chess? That takes time to study to become good. But would it be a good metric for hiring programmers? | | |
| ▲ | Jensson 4 days ago | parent | next [-] | | Because chess is more unrelated to the job? It is easy to see that LeetCode problems are closer to a programmers job than what chess is. But yeah, people used to ask that level of unrelated questions to programmers, and they were happy with the results. "Why are manhole covers round" etc. LeetCode style questions do produce better results than those, so that is why they use them. | | | |
| ▲ | 4 days ago | parent | prev [-] | | [deleted] |
|
|
|
| |
| ▲ | kentm 4 days ago | parent | prev | next [-] | | To play the devils advocate, being able to memorize patterns and recognize which patterns apply to a given problem is extremely valuable. Tons of software dev is knowing the subset of algorithms, data structures, and architecture that apply to a similar problem and being able to adapt it. | | |
| ▲ | tharkun__ 4 days ago | parent | next [-] | | It's funny you mention that. That's literally what CS teaches you too. Which is what "leetcode" questions are: fundamental CS problems that you'd learn about in a computer science curriculum. It's called "reducing" one problem to another. We had an entire semester's mandatory class spend a lot of time on reducing problems. Like figuring out how you can solve a new type of question/problem with an algorithm or two that you already know from before. Like showing that "this is just bin packing". And there are algorithms for that, which "suck" in the CS kind of sense but there are real world algorithms that are "good enough" to be usable to get shit done. Or showing that something "doesn't work, period" by showing that it can be reduced to the halting problem (assuming that nobody has solved that yet - oh and good luck btw. if you want to try ;) ) | | |
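To make the "this is just bin packing" point concrete: bin packing is NP-hard, but a greedy heuristic like first-fit decreasing is the classic "good enough" real-world answer (it uses at most roughly 11/9 of the optimal bin count). A minimal sketch, with illustrative names and data:

```python
def first_fit_decreasing(items, capacity):
    # Greedy heuristic: place each item (largest first) into the first
    # bin with enough remaining space, opening a new bin if none fits.
    bins = []     # remaining capacity of each open bin
    packing = []  # items assigned to each bin
    for item in sorted(items, reverse=True):
        for i, free in enumerate(bins):
            if item <= free:
                bins[i] -= item
                packing[i].append(item)
                break
        else:
            bins.append(capacity - item)
            packing.append([item])
    return packing

print(first_fit_decreasing([4, 8, 1, 4, 2, 1], 10))  # [[8, 2], [4, 4, 1, 1]]
```

Recognizing that a gnarly scheduling or allocation ticket reduces to this, and that the heuristic is acceptable, is the reduction skill the CS class was teaching.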
| ▲ | deepsun 4 days ago | parent [-] | | I did quite a bit of competitive programming in school, and pretty much all the world-class competitive problems are reduced to well-known algorithms. It's quite hard to come up with something new (not proven to be unsolvable for its constraints). I believe problem setters just try to disguise a known algorithm as much as possible. Then comes the ability/memorization to actually code it, e.g. if I knew it needed coding a red-black tree I wouldn't even start. | | |
| |
| ▲ | 4 days ago | parent | prev | next [-] | | [deleted] | |
| ▲ | lupire 4 days ago | parent | prev [-] | | Algorithms and data structures are absolutely trivial for 99% of software dev work. 1% are inventing MapReduce. Architecture is not part of leetcode. |
| |
| ▲ | awesome_dude 4 days ago | parent | prev | next [-] | | > Or problems similar to the problems of the organization hiring? People complain, rightly so in some cases, that their "interview" is really doing some (unpaid) work for the company | |
| ▲ | didibus 4 days ago | parent | prev [-] | | > Leetcode mostly relies on memorizing patterns Math is like that as well though. It's about learning all the prior axioms, laws, knowing allowed simplifications, and so on. | | |
| ▲ | aeonik 4 days ago | parent | next [-] | | In the same way that writing and performing a new song is "just memorizing prior patterns and law" or that writing a new book is the same. I.e. it's not about that. Like sure it helps to have a base set of shared language, knowledge, and symbols, but math is so much more than just that. | | |
| ▲ | Jensson 4 days ago | parent [-] | | Programming competition problems are also much more than just memorizing patterns, that was the point of his post. |
| |
| ▲ | tomatocracy 3 days ago | parent | prev | next [-] | | I think you've missed a big part of maths - yes knowing those things is necessary. But then you also need to be able to see how a difficult or complex problem could be restated or broken down in a different way which lets you use those techniques. Sometimes this is something as trivial as using the right notation or coordinates, sometimes it's much more involved. | |
| ▲ | catlifeonmars 4 days ago | parent | prev [-] | | In math, you usually need to prove said simplifications. So just memorizing is not enough. As you get more advanced, you then start swapping out axioms. | | |
| ▲ | Jensson 4 days ago | parent [-] | | In programming the simplifications have to be correct even if you don't prove them, and being correct isn't that easy. | | |
| ▲ | catlifeonmars 4 days ago | parent [-] | | Pedantic: how do you know something is correct without proving it? How do you know you have covered all possible edge cases? /Pedantic In all seriousness, the intersection between correctness and project delivery is where engineering sits. Solutions must be good enough, correct enough, and cheap enough to fit the use case, but ideally no more than that. |
|
|
|
| |
| ▲ | Freedom2 4 days ago | parent | prev | next [-] | | > Hence Leetcode is a reasonably good test for the job. If it didn't actually work, it would've been discarded by companies long ago. This is an appeal to tradition and a form of survivorship bias. Many successful companies have ditched LeetCode and have found other ways to effectively hire. > If you can show that you can become excellent at one thing, there's a good chance you can become excellent at another thing. My company uses LeetCode. All I want is sane interfaces and good documentation. It is far more likely to get something clever, broken and poorly documented than something "excellent", so something is missing for this correlation. | |
| ▲ | Exoristos 4 days ago | parent | prev | next [-] | | > If it didn't actually work, it would've been discarded by companies long ago. This that I've singled out above is a very confident statement, considering that inertia in large companies is a byword at this point. Further, "work" could conceivably mean many things in this context, from "per se narrows our massive applicant pool" to "selects for factor X," X being clear only to certain management in certain sectors. Regardless, I agree with those who find it obvious that LC does not ensure a job fit for almost any real-world job. | |
| ▲ | saghm 4 days ago | parent | prev | next [-] | | > If it didn't actually work, it would've been discarded by companies long ago You're assuming that something else works better. Imagine if we were in a world where all interviewing techniques had a ton of false positives and negatives without a clear best choice. Do you expect that companies would just give up, and not hire at all, or would they pick based on other factors (e.g. minimizing the amount of effort needed on the company side to do the interviews)? Assuming you accept the premise that companies would still be trying to hire in that situation, how can you tell the difference between the world we're in now and that (maybe not-so) hypothetical one? | | |
| ▲ | roncesvalles 4 days ago | parent [-] | | I never made any claims about optimality. It works (for whatever reason), hence companies continue to use it. If it didn't work, these companies wouldn't be able to function at all. It must be the case that it works better than running an RNG on everyone who applied. Does it mean some genius software engineer who wrote a fundamental part of the Linux kernel but never learned about Minimum Spanning Trees got filtered out? Probably. But it's okay. That guy would've been a pain in the ass anyway. |
| |
| ▲ | pjmlp 4 days ago | parent | prev | next [-] | | Does it work though? When I look at the messy Android code, Fuchsia's commercial failure, Dart being almost killed by politics, Go's marvellous design, WinUI/UWP catastrophical failure, how C++/CX got replaced with C++/WinRT, ongoing issues with macOS Tahoe,.... I am glad that apparently I am not good enough for such projects. | | |
| ▲ | chii 4 days ago | parent [-] | | zero of those failures are of a technical nature. The fact is that they fail is not evidence that leetcode interviews fails to select for high quality engineers. | | |
| ▲ | pjmlp 4 days ago | parent [-] | | On the contrary, they prove that high quality engineers, by whatever measure that happens to be, do not correlate with product quality. |
|
| |
| ▲ | never_inline 4 days ago | parent | prev | next [-] | | > the road to becoming good In my experience, it's totally not true. Many college students of my generation are pretty good with LC hards these days purely due to FOMO-induced obsessive practice, which doesn't translate to a practical understanding of the job (or any other parts of CS like OS/networks/languages/automata either). I will give you an exercise: pick an LC hard problem and it's very likely an experienced engineer who has only done "real work" will not know the "trick" required to solve the problem. (Unless it's something common like BFS or backtracking). I say this as someone with "knight" badge on leetcode, whatever that means, lest you think it's a sour grapes fallacy. | |
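For reference, BFS is one of the few "tricks" genuinely shared between LC grinding and day-to-day work; a minimal shortest-path-on-a-grid sketch (function name and sample grid are illustrative) is the pattern most such problems disguise:

```python
from collections import deque

def shortest_path(grid, start, goal):
    # BFS over a 2D grid of 0 (open) / 1 (wall); returns the number of
    # steps from start to goal, or -1 if the goal is unreachable.
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == goal:
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return -1

grid = [[0, 0, 1],
        [1, 0, 0],
        [0, 0, 0]]
print(shortest_path(grid, (0, 0), (2, 2)))  # 4
```

An experienced engineer will usually know this one; the LC-hard "tricks" (say, binary search over the answer, or bitmask DP) are where the gap the comment describes opens up.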
| ▲ | Calavar 4 days ago | parent | prev | next [-] | | > Hence Leetcode is a reasonably good test for the job. If it didn't actually work, it would've been discarded by companies long ago. I see it differently. I wouldn't say it's reasonably good, I'd say it's a terrible metric that's very tenuously correlated with on the job success, but most of the other metrics for evaluating fresh grads are even worse. In the land of the blind the one eyed man is king. > If you can show that you can become excellent at one thing, there's a good chance you can become excellent at another thing. Eh. As someone who did tech and then medicine, a lot great doctors would make terrible software engineers and vice versa. Some things, like work ethic and organization, are going to increase your odds of success at nearly any task, but there's plenty other skills that are not nearly as transferable. For example, being good at memorizing long lists of obscure facts is a great skill for a doctor, not so much for a software engineer. Strong spatial reasoning is helpful for a software developer specializing in algorithms, but largely useless for, say, an oncologist. | |
| ▲ | pendenthistory 4 days ago | parent | prev | next [-] | | It's also a filter for people who are ok with working hard on something completely pointless for many months in order to get a job. | |
| ▲ | AndrewDavis 4 days ago | parent | prev [-] | | > If it didn't actually work, it would've been discarded by companies long ago That makes the assumption that company hiring practices are evidence based. How many companies continue to use pseudo-science Myers Briggs style tests? |
| |
| ▲ | deadghost 4 days ago | parent | prev | next [-] | | >how fast they can run 100m after practice, while the real job is a slow arduous never ending jog with multiple detours and stops along the way I've always explained it as demonstrating your ping pong skills to get on the basketball team. | |
| ▲ | iyc_ 4 days ago | parent | prev | next [-] | | Mistakenly read this as you wrote that 2D game engine (which looks awesome btw) for a job interview to get the job: "I can't compete with this!!! HOW CAN I COMPETE WITH THESE TYPES OF SUBMISSIONS!?!?! OH GAWD!!!" | |
| ▲ | anonzzzies 4 days ago | parent | prev | next [-] | | Yes. If work was leetcode problem solving, I would actually enjoy it. Updating npm packages and writing tiny features that get canned a week later is all not that stimulating. | |
| ▲ | Figs 4 days ago | parent | prev | next [-] | | > SMEGMA companies Microsoft, Google, Meta, Amazon, I'm guessing... but, what are the other two? | | |
| ▲ | jiggawatts 4 days ago | parent | next [-] | | I prefer AGAMEMNON: Apple, Google, Amazon, Microsoft, Ebay, Meta, NVIDIA, OpenAI, Netflix | |
| ▲ | saghm 4 days ago | parent | prev | next [-] | | "Startups" and "Enterprise"? I guess that basically covers everything | |
| ▲ | esseph 4 days ago | parent | prev [-] | | Lol :) |
| |
| ▲ | RagnarD 4 days ago | parent | prev | next [-] | | "SMEGMA companies." :D | |
| ▲ | throwaway7783 4 days ago | parent | prev [-] | | And nowadays people are blatantly using AI to answer questions like this (https://www.finalroundai.com/coding-copilot). Even trying to stumble through design questions using AI |
|
|
| ▲ | MarcelOlsz 5 days ago | parent | prev | next [-] |
| 100%. I just went through an interview process where I absolutely killed the assignment (had the best one they'd seen), had positive signal/feedback from multiple engineers, CEO liked me a lot etc, only to get sunk by a CTO who thought it would be cool to give me a surprise live test because of "vibe coding paranoia". 11 weeks in the process, didn't get the role. Beyond fucking stupid. This was the demo/take-home (for https://monumental.co): https://github.com/rublev/monumental |
| |
| ▲ | johnfn 4 days ago | parent | next [-] | | It's funny because this repo really does seem vibe-coded. Obviously I have no reason not to believe you, but man! All those emojis in the install shell script - I've never seen anyone other than an AI do that :) Maybe you're the coder that the AI companies trained their AI on. Sorry about the job interview. That sucks. | | |
| ▲ | Tenemo 4 days ago | parent | next [-] | | There's even a rocket emoji in server console.logs... There are memes with ChatGPT and rocket emojis as a sign of AI use. The whole repo looks super vibe-coded, emojis, abundance of redundant comments, all in perfect English and grammar, and the readme also has that "chatty" feel to it. I'm not saying that using AI for take-home assignments is bad/unethical overall, but you need to be honest about it. If he was lying to them about not using any AI assistance to write all those emojis and folder structure map in the repo, then the CTO had a good nose and rightfully caught him. | | |
| ▲ | photonthug 4 days ago | parent | next [-] | | As a big believer in documentation and communication in general, there's this inevitable double-bind that people hate whatever you give them and also hate it if you give them nothing. LLMs have made this worse. No emojis and any effort to be comprehensive? Everyone complains "what is this wall of text", or "this is industry not grad school so cut it out with the fancy stuff" or "no one spends that much time on anything and it must be AI generated". (Frequently just a way of saying that they hate to read, and naively believe that even irreducibly complex stuff is actually simple). Stuff that's got emojis, a friendly casual tone and isn't information dense? Well that's very chatty and cute, it also has to be AI and can't be valuable. Since you can't win with docs, the best approach is to produce high quality diagrams that are simultaneously useful for a wide audience from novice to expert. The only problem is that even producing high quality diagrams at a ratio of 1 diagram per 1k lines of code is still very time consuming if you're putting lots of thought into it, doubly so if you're fighting the diagramming tools, or if you want something that's easy for multiple stakeholders with potentially very different job descriptions to take in. Everyone will call it inadequate, ask why it took so long, and ask for the missing docs that they will hate anyway! On the bright side, LLMs are pretty great at generating mermaid, either from code, or natural language descriptions of data-flows. Diagrams-as-code without needing a whole application UI or one of a limited number of your orgs lucid-chart licenses is making "Don't like it? Submit a PR" a pretty small ask. Skin in the game helps to curb endless bike-shedding criticism | |
| ▲ | saghm 4 days ago | parent [-] | | > No emojis and any effort to be comprehensive? Everyone complains "what is this wall of text", or "this is industry not grad school so cut it out with the fancy stuff" or "no one spends that much time on anything and it must be AI generated". (Frequently just a way of saying that they hate to read, and naively believe that even irreducibly complex stuff is actually simple). > Stuff that's got emojis, a friendly casual tone and isn't information dense? Well that's very chatty and cute, it also has to be AI and can't be valuable. As a counterpoint, I can confidently say that I've never once had anyone give any feedback to me on the presence or absence of emojis in code I've written, whether for an interview, work, or personal projects, and I've never had anyone accuse my documentation of being AI generated or gotten feedback in an interview that my code didn't have enough documentation. There's a pretty wide spectrum between "indistinguishable from what I get when I give an LLM the same assignment as my interviewee" and "lacking any sort of human-readable documentation whatsoever". |
| |
| ▲ | userbinator 4 days ago | parent | prev | next [-] | | If you're using AI for an interview, you are basically telling them "you could just not bother with hiring me and use AI yourself" which is neither good for you nor them. | | |
| ▲ | lupire 4 days ago | parent [-] | | In 2025, everyone is hiring for people who can use AI to write software. They already are using AI themselves. They need more people who can. | | |
| ▲ | userbinator 3 days ago | parent [-] | | Not everyone. I know there are some employers who are extremely against any form of AI being used in making their products. |
|
| |
| ▲ | MarcelOlsz 4 days ago | parent | prev [-] | | Oh my god Becky, there's even a rocket emoji in the server console logs! Should I also be "honest" about tab-completion? Where do you draw the line? Maybe I should be punished for having an internet connection too. Using AI for docker/readme's/simple scaffolding I would have done anyways? Oh the horror! There was no lying because there was no discussion or mention of AI at all. Had they asked me, I'd have happily told them yes I obviously use AI to help me save time on grunt-work, I've been doing this stuff for like 15 years. It's an unpaid take-home assignment. You'd have to be smoking crack to think that I would be rawdogging this. Imagine if I had a family or a wife or an existing job? I'd dump them after getting linked their assignment document. Honestly at this point in the AI winter if you are a guy who has AI-inspired paranoia then I don't want to work for you because you are not "in the know". | | |
| ▲ | sarchertech 4 days ago | parent | next [-] | | Your Hacker News profile says you're the founder of an AI company, and your take-home looks completely vibe-coded. Why in the world are you surprised that a hiring manager is a little suspicious about your coding skills? Given what you've said in your other comments, it seems like you used AI in a way that I wouldn't have a problem with, but just briefly looking through, I can see how it would look suspicious. | | |
| ▲ | MarcelOlsz 4 days ago | parent [-] | | That's all well and good. Totally ask me about AI, I can talk a lot about it. Don't however, make me go through 99% of the interview process up until the very last stage (spanning weeks), and throw a live test in my face, and then have the hiring manager clarify that it's about "vibe coding paranoia". It negates the entire reason I did the take-home assignment. |
| |
| ▲ | raincole 4 days ago | parent | prev | next [-] | | > It's an unpaid take-home assignment It's not defensible in any case. That being said, I think the CTO's "vibe coding paranoia" after seeing this repo is 100% justified. | |
| ▲ | saghm 4 days ago | parent | prev | next [-] | | > Should I also be "honest" about tab-completion? Where do you draw the line? I'd probably draw it somewhere in the miles-long gap between tab completion and generating code with an LLM. It sounds like that's where the company drew it too. | |
| ▲ | 4 days ago | parent | prev [-] | | [deleted] |
|
| |
| ▲ | MarcelOlsz 4 days ago | parent | prev [-] | | I used AI for the Docker setup which I've already done before. I'm not wasting time on that. Yeah you can vibe code basic backend and frontend and whatnot, but you're not going to vibe code your way to a full inverse kinematics solution. I'm not a math/university educated guy so this was truly "from the ground up" for me despite the math being simple. I was quite proud of that. | | |
| ▲ | Tenemo 4 days ago | parent [-] | | So what was the issue the CTO had with vibe coding? Had you disclosed to them that you used LLMs for coding "basic" features outside the math and whatnot? | |
| ▲ | postsantum 4 days ago | parent | next [-] | | CTO's previous job was at Palantir, perhaps he has some reasons to be paranoid | |
| ▲ | MarcelOlsz 4 days ago | parent | prev [-] | | The hiring manager told me that they were getting a bad signal-to-noise ratio in their hiring, where they'd bring someone on-site who had a good assignment and, more often than not, these candidates would shit the bed in a live environment. So the CTO made a surprise live assignment and didn't tell anyone. I was told that he did this to weed out the low-signal people they dealt with recently. >Had you disclosed to them that you used LLMs for coding "basic" features outside the math and whatnot? No, it seems completely immaterial. I'll happily talk about it if asked, but it's just another tool in the shed. Great for scaffolding, but makes me want to rip my hair out more often than not. If it doesn't one-shot something simple for me it has no use, because it's infuriating to use. I didn't get into programming because I liked writing English. |
|
|
| |
| ▲ | jonnycoder 4 days ago | parent | prev | next [-] | | Hah I feel you there. Around 2 years ago I did a take home assignment for a hiring manager (scientist) for Merck. The part B of the assignment was to decode binary data and there were 3 challenges: easy, medium and hard. I spent around 40 hours of time and during my second interview, the manager didn't like my answer about how I would design the UI so he quickly wished me luck and ended the call. The first interview went really well. For a couple of months, I kept asking the recruiter if anyone successfully solved the coding challenge and he said nobody did except me. Out of respect, I posted the challenge and the solution on my github after waiting one year. Part 2 is the challenging part; it's mostly a problem solving thing and less of a coding problem:
https://github.com/jonnycoder1/merck_coding_challenge | | |
| ▲ | lupire 4 days ago | parent | next [-] | | Enjoy the ultimate classic tour de force from world treasure
Chung-chieh (Ken) Shan’s wikiblog
"Proper Treatment" discussion / punchline
http://conway.rutgers.edu/~ccshan/wiki/blog/posts/WordNumber... Start of main content:
http://conway.rutgers.edu/~ccshan/wiki/blog/posts/WordNumber... | |
| ▲ | userbinator 4 days ago | parent | prev | next [-] | | > Part 2 is the challenging part; it's mostly a problem solving thing and less of a coding problem That doesn't look too challenging for anyone who has experience in low-level programming, embedded systems, and reverse engineering. In fact for me it'd be far easier than part 1, as I've done plenty of work similar to the latter, but not the former. | |
| ▲ | MarcelOlsz 4 days ago | parent | prev [-] | | That sucks so hard man, very disrespectful. We should team up and start our own company. I tried checking out your repo but this stuff is several stops past my station lol. |
| |
| ▲ | _whiteCaps_ 4 days ago | parent | prev | next [-] | | A surprise live test is absolutely the wrong approach for validating whether someone's done the work. IMO the correct approach is to go through the existing code with the applicant and have them explain how it works. Someone who used AI to build it (or in the past had someone else build it for them) wouldn't be able to do a deep dive into the code. | | |
| ▲ | MarcelOlsz 4 days ago | parent [-] | | We did go into the assignment after I gently bowed out of the goofy live test. The CTO seemed uninterested & unfamiliar with it after returning from a 3 week vacation during the whole process. I waited. Was happy to run him through it all. Talked about how to extend this to a real-world scenario and all that, which I did fantastically well at. | | |
| ▲ | djmips 4 days ago | parent [-] | | I feel your pain. This isn't a question about AI or not. It's about whether you can do the work and do it well. This kind of nonsense happened before AI. If you can't win the game of Jeopardy you don't get the job, even though the job has nothing to do with being a Jeopardy contestant! |
|
| |
| ▲ | tracker1 4 days ago | parent | prev | next [-] | | Damn... that's WAY more than I'll do for an interview process assignment... I usually time box myself to an hour or two max. I think the most I did was a tic-tac-toe engine but ran out of time before I could make a UI over it. | | |
| ▲ | MarcelOlsz 4 days ago | parent [-] | | I put absolutely every egg into that basket. The prospect of working in Europe (where I planned to return to eventually) working on cool robot stuff was enticing. The fucking CTO thought I vibe-coded it and dismissed me. Shout-out to the hiring manager though, he was real. |
| |
| ▲ | _whiteCaps_ 4 days ago | parent | prev | next [-] | | That is an insane amount of work for a job application. Were you compensated for it at all? | | |
| ▲ | Jensson 4 days ago | parent | next [-] | | It isn't impressive to spend a lot of time on a hiring problem; you shouldn't do that. If you can't do it in a few hours then just move on and apply for another job, you aren't the person they are looking for. Doing it slowly over many days is only taking your time and probably won't get you the job anyway, since the solution will be a hard-to-read mess compared to someone who solves it quickly because they are familiar with the domain. | |
| ▲ | userbinator 4 days ago | parent | prev | next [-] | | The other comments here note, and the author even stated directly, that it was vibe-coded. | | |
| ▲ | MarcelOlsz 4 days ago | parent | prev [-] | | No. Should I invoice them? I'm still livid about it. The kicker is the position pays a max of 60-120k euros, the maximum being what I made 5 years ago. | | |
| ▲ | zipy124 4 days ago | parent | next [-] | | TBF that's a pretty top tier salary for Europe. | | |
| ▲ | MarcelOlsz 4 days ago | parent [-] | | Right but we both know nobody is being offered the 120 right out the gate, so it's more like 100 max. |
| |
| ▲ | _whiteCaps_ 4 days ago | parent | prev [-] | | Probably too late now unfortunately. The job market is brutal right now, and you have my sympathy. I hope you can find a good fit soon. | | |
|
| |
| ▲ | fooker 4 days ago | parent | prev | next [-] | | This repo has enough red flags to warrant some suspicion. You have also not attempted to hide that, which is interesting. | |
| ▲ | samiv 5 days ago | parent | prev | next [-] | | Wait, what.. you did this as a take home for a position? Damn that looks excessive. | | |
| ▲ | MarcelOlsz 4 days ago | parent [-] | | Yes. I put a ton of work into it. I had about 60 pages worth of notes on inverse kinematics, FABRIK, cyclic algorithms used in robotics, A*/RRT for real-world scenarios, etc. I was super prepared. Talked to the CEO for about two hours. Took notes on all the videos I could find of team members on YouTube and of their company. Luckily the hiring manager called me back and levelled with me; nobody kept him in the loop and he felt terrible about it. Some stupid contrived dumbed-down version of this crane demo was used for the live test, where I had to build some telemetry crap. Nerves took over, mind blanked. Here are the take-home assignment requirements btw: https://i.imgur.com/HGL5g8t.png. Here are the live assignment requirements: [1] https://i.imgur.com/aaiy7QR.png & [2] https://i.imgur.com/aaiy7QR.png. At this rate I'm probably going to starve to death before I get a job. Should I write a blog post about my last 2 years of experiences? They are comically bad. This was for monumental.co - found them in the HN who's hiring threads. | |
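[Editor's note: FABRIK (Forward And Backward Reaching Inverse Kinematics), mentioned above, is simple enough to sketch in a few dozen lines. This is a minimal 2D illustration only; a real crane/robotics solution like the one described would presumably add joint constraints and a proper simulation loop.]

```python
import math

def fabrik(joints, target, tol=1e-4, max_iter=100):
    """Minimal 2D FABRIK inverse-kinematics solver.

    joints: list of (x, y) joint positions, base first.
    target: (x, y) point the end effector should reach.
    Returns a new list of joint positions with segment lengths preserved.
    """
    pts = [list(p) for p in joints]
    lengths = [math.dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1)]
    base = list(pts[0])

    if math.dist(base, target) > sum(lengths):
        # Target unreachable: stretch the chain straight toward it.
        for i in range(len(lengths)):
            lam = lengths[i] / math.dist(pts[i], target)
            pts[i + 1] = [(1 - lam) * pts[i][0] + lam * target[0],
                          (1 - lam) * pts[i][1] + lam * target[1]]
        return pts

    for _ in range(max_iter):
        # Backward pass: pin the end effector to the target,
        # then walk toward the base, rescaling each segment.
        pts[-1] = list(target)
        for i in range(len(pts) - 2, -1, -1):
            lam = lengths[i] / math.dist(pts[i], pts[i + 1])
            pts[i] = [(1 - lam) * pts[i + 1][0] + lam * pts[i][0],
                      (1 - lam) * pts[i + 1][1] + lam * pts[i][1]]
        # Forward pass: pin the base back where it belongs.
        pts[0] = list(base)
        for i in range(len(pts) - 1):
            lam = lengths[i] / math.dist(pts[i], pts[i + 1])
            pts[i + 1] = [(1 - lam) * pts[i][0] + lam * pts[i + 1][0],
                          (1 - lam) * pts[i][1] + lam * pts[i + 1][1]]
        if math.dist(pts[-1], target) < tol:
            break
    return pts
```

For reachable targets the backward/forward passes typically converge in a handful of iterations, which is why FABRIK is popular for interactive use.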
| ▲ | suzzer99 4 days ago | parent | next [-] | | > Nerves took over, mind blanked. This never happened to me in a job interview before I turned 40. But once I knew I was too old to look the part, and therefore had to knock it out of the park, mind blank came roaring in. I have so much empathy now for anyone it ever happened to when I was giving a job interview. Performing under that kind of pressure has nothing to do with actual ability to do the job. |
| ▲ | pedrosorio 4 days ago | parent | prev | next [-] | | > Here's the live assignment requirements: [1] https://i.imgur.com/aaiy7QR.png & [2] https://i.imgur.com/aaiy7QR.png. These are the same link | |
| ▲ | samiv 4 days ago | parent | prev | next [-] | | I feel bad for you, and I support you in naming and shaming this company. It's just horseshit to jerk people around like that. I hope you can at least leverage this demo. Maybe remove the identifying details and shove it into your CV as a "hobby project"? It looks pretty good for that. Best! | |
| ▲ | dsff3f3f3f 4 days ago | parent | prev [-] | | Their hiring process seems absolutely absurd. | | |
| ▲ | MarcelOlsz 4 days ago | parent [-] | | They probably think they are geniuses who "weeded out another AI guy!" High fives all around! It was a great process (for me) right up until it wasn't. |
|
|
| |
| ▲ | tayo42 4 days ago | parent | prev [-] | | how much did this job pay? | | |
| ▲ | MarcelOlsz 4 days ago | parent [-] | | 60k-120k euros. The upper 20k probably being entirely inaccessible so in reality probably like 70-100k euros. | | |
| ▲ | tayo42 4 days ago | parent [-] | | It's always these low pay jobs that have the sloppiest interview experiences | | |
| ▲ | achenet 4 days ago | parent | next [-] | | In at least parts of Europe, 70k-100k is pretty good for a mid/senior developer. | | |
| ▲ | bowsamic 4 days ago | parent [-] | | It’s the market rate in my city in Germany (not Berlin not Munich). I pivoted from non CS academia and entered software at 73k |
| |
| ▲ | MarcelOlsz 4 days ago | parent | prev [-] | | I find it's less about the salary than it is the type of company. Any startup doing anything they consider remotely "cutting edge" is going to probably be a shit show. |
|
|
|
|
|
| ▲ | another_twist 5 days ago | parent | prev | next [-] |
| Its not really memorizing solutions. Yes, you can get quite far by doing so, but follow-ups will trip people up. However, if you have memorized a solution and can answer follow-ups, I don't see a problem with Leetcode-style problems. Problem solving is about pattern matching, and the more patterns you know and can match against, the better your ability to solve problems. It's a learnable skill, and better to pick it up now. Personally I've solved Leetcode-style problems in interviews which I hadn't seen before, and some of them were dynamic programming problems. These days it's a highly learnable skill, since GPT can solve many of the problems while also coming up with very good explanations of the solution. Better to pick it up than not. |
| |
| ▲ | silisili 5 days ago | parent | next [-] | | It is and isn't. I'd argue it's not memorizing exact solutions (think copy-paste) but memorizing the fastest algos to accomplish X. And some people might say, well, you should know that anyways. The problem for me is, and I'm not speaking for every company of course, you never really use a lot of this stuff in most run-of-the-mill jobs. So of course you forget it, then have to study again pre-interview. Problem solving is the best way to think of it, but it's awkward for me (and probably others) to spend minutes thinking, feeling pressured as someone just stares at you. And that's where memorizing the hows of typical problems helps. That said, I just stopped doing them altogether. I'd passed a few doing the 'memorizing' described above, only to start and realize it wasn't at all relevant to the work we were actually doing. In that way I guess it's a bit of a two-way filter now. | | |
| ▲ | bluGill 5 days ago | parent | next [-] | | The only part of memorizing the fastest algorithm that the vast majority needs is whatever name it goes by in your library. Generic reusable code works very well for algorithms in almost any language. Even if you are an exception, either you are writing the library, meaning you write that algorithm once for the hundreds of other users, or the algorithm was written once (long ago) and you are just spending months with a profiler trying to squeeze out a few more CPU cycles of optimization. There are more algorithms than anyone can memorize that are not in your library, but either it is good enough to use a similar one that already is in your library, or you will build it once and, once it works, you never go back to it. Which is to say memorizing how to implement an algorithm is a negative: it means you don't know how to write/use generic reusable code, and that lack is costing your company hundreds of thousands of dollars. | |
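[Editor's note: the point above, that knowing the library entry point usually beats memorizing the implementation, is easy to illustrate. A small Python sketch with made-up data:]

```python
import bisect
import heapq

data = [9, 1, 7, 3, 5]

# "Top-k" without hand-rolling a heap or a partial sort:
top_two = heapq.nlargest(2, data)

# Binary search without writing (and subtly mis-writing) the loop yourself:
sorted_data = sorted(data)
idx = bisect.bisect_left(sorted_data, 5)

print(top_two)      # [9, 7]
print(sorted_data)  # [1, 3, 5, 7, 9]
print(idx)          # 2
```

Each call replaces an algorithm that is a classic interview question to implement by hand, yet on the job the only thing you need to remember is the name.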
| ▲ | Freedom2 5 days ago | parent | prev | next [-] | | "Fastest algos" very rarely solve actual business problems, which is what most of us are here to do. There's some specialized fields and industries where extreme optimization is required. Most of software engineer work is not that. | |
| ▲ | throwaway31131 4 days ago | parent | prev | next [-] | | I'd say it's not even problem solving; it's more pattern recognition. I actually love LC and have been doing a problem a week for years. Basically I give myself 30 minutes and see what I can do. It's my equivalent to the Sunday crossword. After a while the signals and patterns became obvious, to me anyway. I also love puzzlerush at chess.com. In chess puzzles there are patterns and themes. I can easily solve a 1600-rated problem in under 3 seconds for a chess position I've never seen before, not because I search some move tree in my mind, but because I just recognize and apply the pattern. (It also makes it easier to trick the player when rushing, but even the tricks have patterns :) That said, in our group we will definitely have one person ask the candidate a LC-style question. It will probably be me asking, and I usually just make it up on the spot based on the resume. I think it's more fun when neither one of us knows the answer. Algorithm development, especially on graphs, is a critical part of the job, so it's important to demonstrate competency there. Software engineering is a hugely diverse field now. Saying you're a programmer is kinda like saying you're an artist. It does give some information, but you still don't really know what skill set that person uses day to day. | |
| ▲ | gotts 4 days ago | parent | prev | next [-] | | > you never really use a lot of this stuff in most run of the mill jobs. So of course you forget it, then have to study again pre interview. I'm wondering how software devs explain this to themselves. What they train for vs what they actually do at their jobs differ more and more with time. And this constant cycle of forgetting and re-learning sounds like a nightmare. Perhaps people burn out not because of their jobs but the system they ended up in. | |
| ▲ | raincole 4 days ago | parent | prev [-] | | > memorizing fastest algos I don't think most LC problems require you to do that. Actually most of them I've seen only require basic concepts taught in Introduction to Algorithms like shortest path, dynamic programming, binary search, etc. I think the only reason LC problems stress people out is time limit. I've never seen a leetcode problem that requires you to know how to hand code an ever so slightly exotic algorithm / data structure like Fibonacci heap or Strassen matrix multiplication. The benefit of these "fastest algos" is too small to be measured by LC's automatic system anyway. Has that changed? My personal issue with LC is that it has a very narrow view of what "fast" programs look like, like most competitive programming problem sets. In real world fast programs are fast usually because we distribute the workload across machines, across GPU and CPU, have cache-friendly memory alignment or sometimes just design clever UI tricks that make slow parts less noticeable. |
| |
| ▲ | tracker1 5 days ago | parent | prev | next [-] | | I'm fine with that in an interview... I'm not fine with that, in a literally AI graded assignment where you cannot ask clarifying questions. In those cases, if you don't have a memorized answer a lot of times I cannot always grasp the question at hand. I've been at this for 30+ years now, I've built systems that handle millions of users and have a pretty good grasp at a lot of problem domains. I spent about a decade in aerospace/elearning and had to pick up new stuff and reason with it all the time. My issue is specifically with automated leetcode pre-interview screening, as well as the gamified sites themselves. | |
| ▲ | HarHarVeryFunny 4 days ago | parent | prev | next [-] | | I'd say that learning to solve tough LeetCode problems has very little (if not precisely zero) value in terms of learning to do something useful as a programmer. You will extremely rarely need to solve these types of tougher select-the-most-efficient-algorithm problems in most real-world S/W dev jobs, and nowadays if you do, then just ask AI. Of course you may need to pass an interview LeetCode test, in which case you may want to hold your nose and put in the grind to get good at them, but IMO it really doesn't say anything good about the kind of company that thinks this is a good candidate filter (especially for more experienced candidates), since you'd have to be stupid not to use AI if actually tasked with solving something like this on the job. | |
| ▲ | DrewADesign 4 days ago | parent [-] | | If a position needed low-level, from-scratch code so performance-critical, and needed it so quickly, that the developer had to recall all of this stuff from memory, candidates likely wouldn't be asked to sit a technical interview at all, let alone some gotcha test. |
| |
| ▲ | mcdeltat 4 days ago | parent | prev | next [-] | | There's probably a general positive correlation between knowing a lot of specific algorithms/techniques (i.e. as tested by LC) and being a great developer. HOWEVER, I think the real-world job scenario is a far smaller subset of that. Firstly, you get like 30 mins to do these questions, which is small compared to the time variance introduced by knowing or not knowing the required algorithm. If you know it, you'll be done in like 10 mins with a perfect answer; if you don't, you could easily spend 30 mins figuring it out and fail. So while on average the people passed by LC may be good engineers, in any one scenario it's likely you reject a good engineer because the variance is large. And then it's easy to see why people get upset, because yeah, it feels dodgy to be rejected because you happen to not know some obscure algorithm off the top of your head. The process could be fairer. Secondly, as many say, the actual job rarely involves this technical stuff under such time pressure. Knowing algorithms or not means basically nothing when the job is like debugging CI errors for half your day. | |
| ▲ | pavlov 4 days ago | parent | prev | next [-] | | Ironic that you’re touting these puzzles as useful interviewing techniques while also admitting that ChatGPT can solve them just fine. If you’re hiring software engineers by asking them questions that are best answered by AI, you’re living in the past. | | |
| ▲ | another_twist 4 days ago | parent [-] | | That was because the parent complained about not having good write-ups. You can use GPT, which has already been trained on publicly available solutions, to generate a very good explanation. Like a coaching buddy. Keeping in mind there are paid services that charge 15k USD for this type of thing, being able to upskill at just 20 bucks a month is an absolute steal. | |
| |
| ▲ | leptons 4 days ago | parent | prev | next [-] | | Been in software development for 30 years. I have no idea what "Leetcode" is. As far as I know I've never been interviewed with "Leetcode", and it seems like I should be happy about that. And when someone uses "leet" when talking about computing, I know that they aren't "elite" at all and it's generally a red flag for me. | |
| ▲ | wyager 5 days ago | parent | prev | next [-] | | Leetcode with no prep is a pretty decent coding-skill test. The problem is that it is too amenable to prep. You can move your score like 2 standard deviations with practice, which makes the test almost useless in many cases. On good tests, your score doesn't change much with practice, so the system is less vulnerable to Goodharting and people don't waste/spend a bunch of time gaming it. | |
| ▲ | m000 4 days ago | parent | next [-] | | I think LC is used mostly as a metric of how much tolerance you have for BS and unpaid work: if you are willing to put in unpaid time preparing for something with realistically zero relevance to the day-to-day duties of the position, then you are ripe enough to be squeezed out. | |
| ▲ | Zarathruster 4 days ago | parent | next [-] | | Cynical, but correct. I've long maintained that these trials, much like those we encounter in the school system, are only partially meant to test aptitude. Perhaps more importantly, they measure submissive compliance. | |
| ▲ | baq 4 days ago | parent | prev [-] | | It selects for age and childlessness. | | |
| ▲ | Jensson 4 days ago | parent [-] | | And experience selects for age as well, doesn't make it a bad signal. |
|
| |
| ▲ | stuxnet79 4 days ago | parent | prev [-] | | > On good tests, your score doesn't change much with practice, so the system is less vulnerable to Goodharting and people don't waste/spend a bunch of time gaming it This framing of the problem is deeply troubling to me. A good test is one that evaluates candidates on the tasks that they will do at the workplace and preferably connects those tasks to positive business outcomes. If a candidate's performance improves with practice, then so what? The only thing we should care about is that the interview performance reflects well on how the candidate will do within the company. Skill is not a univariate quantity that doesn't change with time. Also it's susceptible to other confounding variables which negatively impact performance. It doesn't matter if you hire the smartest devs. If the social environment and quality of management is poor, then the work performance will be poor as well. | | |
| ▲ | wyager 2 days ago | parent [-] | | > A good test is one that evaluates candidates on the tasks that they will do at the workplace Systematizing this is not feasible. The next best thing (in terms of predictive power for future job success) is direct IQ tests, which are illegal in the US. Next best thing after that are IQ proxies like coding puzzle ability. > If a candidate's performance improves with practice, then so what? It means the test isn't measuring anything useful. The extremely broad spectrum skills that benefit a software/eng role aren't something you can "practice". > The only thing we should care about is that the interview performance reflects well on how the candidate will do within the company. Agreed, which any Goodhartable test will never do. | | |
| ▲ | tptacek 2 days ago | parent [-] | | Direct IQ tests are not illegal in the US. Several very large companies deliver them to candidates; the most prominent company that administers general cognitive exams for employment purposes has a logo crawl on its front page with names your parents would recognize. If this persistent meme about them being forbidden was real, employment lawyers would be making bank off them. The real reason most companies don't do general IQ testing for candidates is that it's not an effective screen for aptitude. |
|
|
| |
| ▲ | giveita 4 days ago | parent | prev [-] | | Few people are in both circles of "can memorize answers" and "don't understand what they are doing". You would need "photographic" memory. | |
| ▲ | aeonik 4 days ago | parent [-] | | It's bizarre because I see the opposite. Most people memorize and cargo cult practices with no deeper understanding of what they are doing. |
|
|
|
| ▲ | eek2121 4 days ago | parent | prev | next [-] |
| leetcode just shows why interviews are broken. As a former senior dev (retired now, thanks to almost dying) I can tell you that the ability to write code is like 5% of the job. Every interview I've ever attended has wasted gazillions of dollars and robbed the company of 10X that amount. Until companies can focus on things like problem solving, brainstorming, working as a team, etc., the situation won't improve. If I am wrong, why is it that the vast majority of my senior dev and dev management career involved the things I just mentioned? (I had to leave the field, sadly, due to disability) Oh, and HR needs to stop using software to filter. Maybe ask for ID or something; as it stands, the filters are flagging everyone, and the software is sinking the ship with all of you on it. |
|
| ▲ | GuB-42 4 days ago | parent | prev | next [-] |
| > My biggest problem with leetcode type questions is that you can't ask clarifying questions. What is there to clarify? Leetcode-type questions are usually clear, much clearer than in real life projects. You know the exact format of the input, the output, the range for each value, and there are often examples in addition to the question. What is expected is clear: given the provided example inputs, give the provided example outputs, but generalized to cover all cases of the problem statement. The boilerplate is usually provided. One may argue that this is one of the reasons why leetcode-style questions are unrealistic: they are too well specified compared to real life problems, which are often incomplete or even wrong and require you to fill in the gaps. Also, in real life, you may not always get to ask for clarification: "here, implement this", "but what about this part?", "I don't know, and the guy who knows won't be back before the deadline, do your best". The "coin" example is a simplification; the actual problem statement is likely more complete, but the author of the article probably felt these details were not relevant to the article, though they would be for someone taking the test. |
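[Editor's note: the "coin" example referred to here is presumably the classic coin-change problem: the fewest coins summing to a target amount. A minimal bottom-up dynamic-programming sketch, with the function name and examples invented for illustration:]

```python
def min_coins(coins, amount):
    """Fewest coins (unlimited supply of each denomination) summing
    to `amount`; returns -1 if no combination works."""
    INF = float("inf")
    dp = [0] + [INF] * amount  # dp[a] = fewest coins to make amount a
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and dp[a - c] + 1 < dp[a]:
                dp[a] = dp[a - c] + 1
    return dp[amount] if dp[amount] != INF else -1

print(min_coins([1, 2, 5], 11))  # 3  (5 + 5 + 1)
print(min_coins([2], 3))         # -1
```

Note how the example inputs and outputs pin the task down completely, which supports the point that these questions tend to be over-specified rather than under-specified.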
|
| ▲ | Exoristos 4 days ago | parent | prev | next [-] |
| These interviews seem designed to filter out applicants with active jobs. In fact, I'd say that they seem specifically for selecting new CS graduates and H1B hires. |
|
| ▲ | lawlessone 5 days ago | parent | prev | next [-] |
| The ones I've gotten have all seemed more like tests of my puzzle-solving skills than of coding. The worst ones I've had came with extra problems: one I was only told about when I joined the interview, and that they would be watching live. One where they wanted me streaming my face the whole time (maybe some people are fine with that). And one that would count it against me if I tabbed to another page, so no documentation, because they assume I'm just googling it. Still, it's mostly on me to prepare and expect this stuff now. |
| |
| ▲ | another_twist 5 days ago | parent [-] | | You can make up API calls which you can say you'd implement later. As long as these are not tricky blocks, you'll be fine. | | |
| ▲ | SJC_Hacker 4 days ago | parent [-] | | For Google, Facebook and Amazon, yes. At least last I interviewed there a few years ago. They're more interested in the data structure/algorithm But I have also been to places that demand actual working code which is compiled and is tested against cases Usually there the problem is simpler, so there's that |
|
|
|
| ▲ | godelski 4 days ago | parent | prev | next [-] |
| > you can't ask clarifying questions
But isn't that the main skill actually being tested? How the candidate goes about solving problems? I mean, if all we did was measure people's skill at making sweeping assumptions, we'd likely end up with people who oversimplify problems, and all of software would go to shit and get insanely complex... Is the hard part writing the lines of code, or solving the problem? |
| |
| ▲ | wst_ 4 days ago | parent [-] | | Skill? LC is testing rote memorization of artificial problems you most likely never encounter in actual work. | | |
▲ | godelski 4 days ago | parent [-] | | You didn't read my comment correctly, and should have asked clarifying questions |
|
|
|
| ▲ | gopher_space 4 days ago | parent | prev | next [-] |
| > My biggest problem with leetcode type questions is that you can't ask clarifying questions. My mind just doesn't work like most do, and leetcode to some extent seems to rely on people memorizing leetcode type answers. On a few, there's enough context that I can relate real understanding of the problem to, such as the coin example in the article... for others I've seen there's not enough there for me to "get" the question/assignment. The issue is that leetcode is something you end up with after discovery + scientific method + time, but there's no space in the interview process for any of that. Your mind slides off leetcode problems because it reverses the actual on-the-job process and loses any context that'd give you a handle on the issue. |
|
| ▲ | strangattractor 5 days ago | parent | prev | next [-] |
| IMO leetcode has multiple problems. 1. People can be hired to take the test for you - surprise surprise
2. It is akin to deciding if someone can write a novel from reading a single sentence. |
| |
▲ | another_twist 5 days ago | parent [-] | | Hiring people for the test is only valid for online assessment. For an onsite, it's very obvious if the candidates have cheated on the OA. I've been on the other side and it's transparent. > It is akin to deciding if someone can write a novel from reading a single sentence. For most decent companies, the hiring process involves multiple rounds of these challenges along with system designs. So it's like judging writing ability by having candidates actually write and come up with sample plots. Not a bad test. | | |
|
|
| ▲ | giveita 4 days ago | parent | prev | next [-] |
Where I interviewed, you had effectively 1 or 2 LC questions, but the interviewer took clarifying questions, making for a real-time discussion and coding exercise. This solves one problem, but having to live-code does add performance anxiety to the mix. |
|
| ▲ | garrettgarcia 5 days ago | parent | prev | next [-] |
| > My biggest problem with leetcode type questions is that you can't ask clarifying questions. Huh? Of course you can. If you're practicing on leetcode, there's a discussion thread for every question where you can ask questions till the cows come home. If you're in a job interview, ask the interviewer. It's supposed to be a conversation. > I wouldn't even mind the studying on leetcode types sites if they actually had decent explainers If you don't find the hundreds of free explanations for each question to be good enough, you can pay for Leetcode Pro and get access to editorial answers which explain everything. Or use ChatGPT for free. > It's not a matter of skill, it's just my ability to take in certain types of problems doesn't work well. I don't mean to be rude, but it is 100% a matter of skill. That's good news! It means if you put in the effort, you'll learn and improve, just like I did and just like thousands and thousands of other humans have. > Without any chance of additional info/questions it's literally a setup to fail. Well with that attitude you're guaranteed to fail! Put in the work and don't give up, and you'll succeed. |
| |
▲ | tracker1 5 days ago | parent | next [-] | | Last year, I saw a lot of places do effectively AI/automated pre-interview screenings with a leetcode web editor and a video capture... This is what I'm talking about. I'm fine with hard questions in an actual interview. | |
▲ | another_twist 5 days ago | parent | prev | next [-] | | > My biggest problem with leetcode type questions is that you can't ask clarifying questions. Yeah, this one confused me. Not asking clarifying questions is one of the surest ways of failing an interview. Kudos if the candidates ask something that the interviewers haven't thought of, although it's rare, as most problems go through a vetting process (along with leak detection). | | | |
▲ | epolanski 5 days ago | parent | prev | next [-] | | Many interviews now involve automated exercises on websites that track your activity (don't think about triggering a focus change event on your browser, it gets reported). Also, the reviewer gets an AI report telling them whether you copied the solution from somewhere (expressed as a % probability). You have a few minutes and you're on your own. If you pass that abomination, maybe, you get the in-person ones. It's ridiculous what software engineers impose on their peers when hiring; ffs, lawyers, surgeons, civil engineers get NO practical nor theoretical test, none. | |
| ▲ | dmoy 5 days ago | parent | next [-] | | The major difference between software devs and lawyers, surgeons, and civil engineers is that the latter three have fairly rigorous standards to pass to become a professional (bar, boards, and PE). That could exist for software too, but I'm not sure HN folks would like that alternative any better. Like if you thought memorizing leetcode questions for 2 weeks before an interview was bad, well I have some bad news. Maybe in 50-100 years software will have that, but things will look very different. | | |
▲ | epolanski 4 days ago | parent [-] | | You ain't interviewing your plumber or accountant, come on, and I have millions of other examples. | |
| ▲ | dmoy 4 days ago | parent [-] | | Accountants have to sit for the CPA exams (four of them), and depending on the state may have required graduate course load. And also you should interview your CPA, because a lot are not very good at whatever specific section of accounting you need (e.g. tax filing). Plumber is probably the closest to what you're getting at. They are state licensed typically, with varying levels of requirement. But the requirement is often just like "have worked for 2-4 years as a trainee underneath a certified plumber" or whatever. That would be closest to what I'm guessing you would be recommending? Also relevantly: the accountant and plumber jobs that are paying $300k-$500k+ are very rare. There exist programming jobs that pay what a typical plumber makes, but don't have as many arcane interview hoops to jump through. |
|
| |
| ▲ | SAI_Peregrinus 4 days ago | parent | prev | next [-] | | At least in the US, lawyers, surgeons, & civil engineers all have accredited testing to even enter the profession, in the form of the bar exam, boards, and FE & PE tests respectively. So they do have such theoretical tests, but only when they want to gain their license to practice in a given state. Software doesn't have any such centralized testing accreditation, so we end up with a mess. | |
▲ | lukan 5 days ago | parent | prev [-] | | "don't think about triggering a focus change event on your browser, it gets reported." So... my approach would be to just open dev tools and deactivate that event. Show of practical skill, or cheating? | |
| ▲ | supriyo-biswas 4 days ago | parent [-] | | Switching to devtools also triggers a focus change and is detectable by other means (such as repeatedly invoking a debugger statement). | | |
▲ | lukan 4 days ago | parent [-] | | One can type in devtools without having the focus on dev tools, but indeed, to track down the event, one has to lose focus for a while. But after you find out what line of JS is needed, you can just inject it without dev tools, with Greasemonkey for instance. But probably a general solution exists... and there are actually extensions that will do that in general. |
|
|
| |
| ▲ | ok123456 5 days ago | parent | prev [-] | | How does asking clarifying questions work when a non-programmer is tasked with performing the assessment, because their programmers are busy doing other things, or find it degrading and pointless? |
|
|
| ▲ | 4 days ago | parent | prev [-] |
| [deleted] |