lowkey_ 10 hours ago

Agreed on the bimodal, but I don't think this is junior vs. senior - I think it's just competence being rooted out.

The majority of engineers, in my hiring experience, failed very simple tests pre-AI. In a world where anyone can code, they're no better than previously non-technical people. The CS degree is no longer protection.

The gap between average and the best engineers now, though, is even higher. The best engineers can visualize the whole architecture in their head, and describe exactly what they want to an AI - their productivity is multiplied, and they rarely get slowed down.

While this could be done by junior or senior, I think junior usually has the slight advantage in being more AI-native and knowing how to effectively prompt and work with AI, though not always.

sam0x17 10 hours ago | parent | next [-]

I see it the opposite way, actually, with respect to the CS degree. If you earned your CS degree (or any degree) before 2022 or so, the value of that degree is going to grow and grow and grow, until the last few people who had to learn before AI are dying out like the last COBOL developers.

AI has fundamentally broken the education system in a way that will take decades for it to fully recover. Even if we figure out how to operate with AI properly in an educational setting in such a way that learners actually still learn, the damage from years of unqualified people earning degrees and then entering academia is going to reverberate through the next 50 years as those folks go on to teach...

boomskats 8 hours ago | parent | next [-]

What I think is disappearing is not so much the quality of academic education, but the baptism by firehose that entry level CS positions used to offer - where you had no choice but to learn how things actually work, while having a safe space to fail during a period in your career when productivity expectations of you were minimal to none.

That time when you got to internalise through first-hand experience what good & bad look like is when you built the skill/intuition that now differentiates competent LLM-wielding devs from the vibers. The problem is that expectations of juniors are inevitably rising, and they don't have the experience or confidence (or motivation) to push back on the 'why don't you just AI it' management narrative, so by default they're rolling the dice to meet those expectations. This is how we end up with a generation of devs who truly don't understand the technology they're deploying, and imho this is the boringdystopia / skynet future that we all need to defend against.

I know it's probably been said a million times, but this kinda feels like global warming, in that it's a problem that we fundamentally will never be able to fix if we just continue to chase short term profit & infinite growth.

sam0x17 7 hours ago | parent | next [-]

> What I think is disappearing is not so much the quality of academic education, but the baptism by firehose that entry level CS positions used to offer - where you had no choice but to learn how things actually work while having a safe space to fail during a period in your career when productivity expectations of you were minimal to none

I would say that baptism by fire _is_ where the quality of an academic education comes from, historically at least. They are the same picture.

hn_acc1 5 hours ago | parent | prev [-]

Agreed. I remember (a long time ago) being on an internship (workterm) and after doing some amount of work for the day, I spent some time playing around with C pointers, seeing what failed, what didn't, what the compiler complained about, etc.

jjmarr 9 hours ago | parent | prev | next [-]

> If you earned your CS degree (or any degree) before 2022 or so, the value of that degree is going to grow and grow and grow

In my experience, target schools are the only universities now that can make their assignments too hard for AI.

When my university tried that, the assignments were too hard for students. So they gave up.

sam0x17 3 hours ago | parent | next [-]

This comment would make sense 6 months ago. Now it is much, much, much more likely any given textually answerable problem will be way easier for a bleeding edge frontier AI than a human, especially if you take time into account

jasonfarnon 5 hours ago | parent | prev [-]

What university is assigning undergrads assignments too hard for AI?

9wzYQbTYsAIc 8 hours ago | parent | prev | next [-]

That’s an insight that a project I’m working on has built upon: https://unratified.org/connection/ai/higher-order-effects/#1...

Education and training and entry level work build judgement.

kakacik 9 hours ago | parent | prev | next [-]

That's not something enthusiasts here and elsewhere want to hear; that's pretty obvious in this discussion too. People seem extremely polarized these days.

AI is either the next wheel or abysmal doom for future generations. I see both and neither at the same time.

In a corporate environment, where navigating processes, politics, and other non-dev tasks takes significantly longer than actual coding, AI is just a slightly better Google search. And trust me, all these non-dev parts are still growing, and growing fast. It's useful, but it's not elevating people beyond their true levels in any significant way (I guess we can agree that e.g. number of lines produced per day ain't a good metric - more a Dilbert-esque comic for a Friday afternoon).

zeroCalories 9 hours ago | parent | prev | next [-]

We're now reaching the point where people have gone their whole college education on AI, and I've noticed a huge rise in the number of engineers that struggle to write basic stuff by hand. I had someone tell me they forgot how to append to a list in their chosen language, and couldn't define a simple tree data structure with correct syntax. This has made me very cautious about maintaining my fluency in programming, and I'll usually turn off AI tools for a good chunk of the day just to make sure I don't get too rusty.
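For context, the two things mentioned are about as basic as it gets - a quick sketch (in Python; the commenter didn't say which language):

```python
# Appending to a list
langs = ["python", "go"]
langs.append("rust")          # langs is now ["python", "go", "rust"]

# A minimal binary tree node
class TreeNode:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

# A three-node tree: 1 with children 2 and 3
root = TreeNode(1, TreeNode(2), TreeNode(3))
```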

andrekandre 4 hours ago | parent [-]

  > I'll usually turn off AI tools for a good chunk of the day just to make sure I don't get too rusty.
same, but it's hard to do when $work has set a quota on AI usage and # of AI-related PRs every month...

andai 8 hours ago | parent | prev [-]

"Those who can't, do..."

post-it 9 hours ago | parent | prev | next [-]

> The best engineers can visualize the whole architecture in their head, and describe exactly what they want to an AI

I think this must be part of it. I see so many posts about people burning a thousand dollars in AI credits building a small app, and I have no idea why. I use the $20 Claude plan and I rarely run out of usage, and I make all kinds of things. I just describe what I want, do a few back-and-forths of writing out the architecture, and Claude does it.

I think the folks burning thousands of dollars of credits are unable to describe what they want.

boppo1 an hour ago | parent | next [-]

> think the folks burning thousands of dollars of credits are unable to describe what they want.

Basically, yes. I bought 'business tier' and I know about webdev but I'm somewhere between intern and junior, so I do a lot of discussing. One session is "I want [functionality and constraints], ask me relevant major design questions" then implementation, then me investigating and asking for fixes.

andrekandre 4 hours ago | parent | prev [-]

  > I think the folks burning thousands of dollars of credits are unable to describe what they want.
my related question whenever I hear a story like that: are they just filthy rich, or do they have a plan to make that money back?

dakiol 10 hours ago | parent | prev | next [-]

> While this could be done by junior or senior, I think junior usually has the slight advantage in being more AI-native and knowing how to effectively prompt and work with AI, though not always.

But juniors don't (usually) have the knowledge to assess whether what the AI has produced is OK or not. I agree that anybody (junior or senior) can produce something with AI; the key question is whether the same person has the skills to assess (e.g., to ask the right questions) that the produced output is what's needed. In my experience, junior + AI is just a waste of money (tokens) and a nightmare to take accountability for.

koonsolo 5 hours ago | parent [-]

I don't see the value of a junior instructing an AI, because I as a senior can also instruct an AI.

I perceive the AI itself as a very fast junior that I pair program with. So you basically need the seniority to be able to work with a "junior ai".

The bar for human juniors is now way higher than it used to be.

boppo1 an hour ago | parent | next [-]

>The bar for human juniors is now way higher than it used to be.

What do you think that bar is now? How does someone signal being 'past the bar'? If I hand-wrote a toy Gaussian splat renderer, is that better than someone who used AI to implement a well-optimized one with lots of features in Vulkan?

jghn 2 hours ago | parent | prev [-]

Perhaps in a year or so the AI will tell the human juniors what to do

jimbokun 9 hours ago | parent | prev | next [-]

I was skeptical but I'm really starting to see the productivity benefits now.

I very much follow the pattern of having the whole architecture in my head and describe it to the AI which generates the appropriate code. So now the bottlenecks are all process related: availability of people to review my PRs, security sign offs on new development, waiting on CI builds and deployments, stakeholder validation, etc. etc.

weatherlite 9 hours ago | parent | prev | next [-]

> The majority of engineers, in my hiring experience, failed very simple tests pre-AI

Did you consider that tech whiteboard / leetcode interviews are unnaturally stressful environments? Have you gone through a mid/difficult technical appraisal yourself lately? Try it out, just to get an idea how it feels on the other side...

naet 7 hours ago | parent | next [-]

I used to do online interviews with full access to Google or any online resource (so long as you shared your screen and I could see). Use your own code editor, no penalty at all for searching up syntax or anything else.

I always asked a simple question, like: here is an array full of objects; please filter out any objects where the "age" property is less than 20, or the "eye color" property is red or blue. It was meant more as a sanity check that this person can do basic programming than anything else.
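Concretely, the question amounts to something like this (a Python sketch; the original would have been in whatever language the candidate claimed to work in, with the field names as described above):

```python
people = [
    {"age": 25, "eye color": "green"},
    {"age": 15, "eye color": "brown"},  # dropped: age < 20
    {"age": 30, "eye color": "red"},    # dropped: red/blue eye color
]

# Keep only the entries that survive both conditions
kept = [
    p for p in people
    if p["age"] >= 20 and p["eye color"] not in ("red", "blue")
]
# kept == [{"age": 25, "eye color": "green"}]
```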

Tons and tons of people failed to make basically any progress, much less solve the problem, despite saying that they worked programming day to day in that language. For a mid level role I would filter out a good 8 or 9 out of ten applicants with it.

I would consider it a non-leetcode type of question since it did not require any algorithm tricks or any optimization in time/space.

Nowadays that kind of question is trivial for AI, so it doesn't seem like the best test. I'm not hiring right now, but when I do, I'm not sure what I will ask.

koonsolo 4 hours ago | parent [-]

Exactly my experience too, and I'm doing hiring at the moment. We used to filter out the worst with a HackerRank test, but now the idiots cheat with AI, and then we have to waste our time in an interview. It's difficult at the moment.

dolebirchwood 9 hours ago | parent | prev | next [-]

> mid/difficult

You're assuming the question has to even be that difficult. I've proctored sessions for senior-level webdev roles where the questions were akin to "baby's first React component" -- write a component that updates a counter when you click a button. So many candidates (who purported to be working with React for years) would fail, abysmally. Not like they were just making small mistakes; I didn't even care about best practices -- they just needed to make it work. So many failed. Lot of frauds out there.

xeromal 9 hours ago | parent | next [-]

I think some of this can probably be attributed to being maintenance devs who don't build a lot of greenfield stuff. I got this way in one of my past jobs. I think we as devs really need to practice creating things from scratch from time to time. Working out those kinks is a good skill (less so with AI), and it's also good practice for those baby components you'd need to make in an interview.

ryandrake 8 hours ago | parent | next [-]

When I did tech interviews, I used to think I could just jump right in with an intermediate level question and go from there. But the reality is that most of the candidates I interviewed couldn't even answer a trivial question that just required a basic for-loop with an if-statement inside it. These are not pressure-cooker interviews where they need to balance a binary tree while having Baby Shark blasted at them on full volume. These are chill interviews where I ask them to iterate through a string and tell me where the first "x" character is.
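For reference, that trivial question really is just a loop and an if (a Python sketch; the interview presumably allowed any language):

```python
def first_x(s):
    """Return the index of the first 'x' in s, or -1 if there isn't one."""
    for i, ch in enumerate(s):
        if ch == "x":
            return i
    return -1

# first_x("hexagon") == 2, first_x("aaa") == -1
```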

There are so many software engineering candidates who literally cannot write the simplest code. I even had someone actually say "I don't really write code at my current job, I'm more of a thought leader." Bzzzzzt.

I've always had what I called level 1, level 2, and level 3 questions ready for candidates. But I almost never even got to level 2, and never in 20 years of interviewing got to my level 3 questions.

jghn 8 hours ago | parent | next [-]

I always wonder when people tell these stories exactly what the metric is.

I've been around the block for over 3 decades. I've had a number of high-level positions across both IC and management tracks. These days I'm very hands-on-keyboard across a number of clients. If you asked me to write a basic for loop or if statement, there's a small chance I'd flub the exact syntax on a whiteboard - partly because I bounce between languages all day and wires get crossed on the fly, partly for the standard interview-pressure reasons. Whereas if the test is "does this person understand what a for loop is and how it works?", then yes, I can easily demonstrate I do.

In real life I'm not going to take an interview where there's not already that degree of trust, so if that question comes up, something is already wrong. But I'm sure there are interviewers out there who'd fail someone for it.

nottorp 8 hours ago | parent | prev [-]

There's the old Joel theory that the good programmers don't apply for jobs because they just get invited...

zeroCalories 9 hours ago | parent | prev [-]

TBH I'm like that, but how hard could writing a React component be? I'm not even a React programmer but I can probably write working code on a whiteboard.

dolebirchwood 8 hours ago | parent [-]

The best candidates would have that question wrapped up in 5 minutes. Like they're not even having to think about it, which is honestly all I cared about testing for -- do something really easy really fast so I know you're not BSing me, and then we can move on to just having a conversation about your past experience.

One of the worst guys took 20 minutes, with me having to coach him through it the entire time. It was a true exercise in patience, but I don't mind helping people learn new things. When he got his rejection email, he actually complained to the recruiter because he thought he did really well. Dude...

sbrother 8 hours ago | parent [-]

My version of fizzbuzz (I'm in backend/ML/NLP) is counting how many times each word appears in a string. Literally `return Counter(text.lower().split())` but it's totally fine if you want to do it in a for loop or whatever, as long as you can fluently write an incredibly simple function.
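Spelled out in full (Python, matching the one-liner above; the explicit loop is the "for loop or whatever" version):

```python
from collections import Counter

def word_counts(text):
    # The one-liner version from above
    return Counter(text.lower().split())

def word_counts_loop(text):
    # The same thing with an explicit loop
    counts = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1
    return counts
```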

Half of the people I screen fail it. It's crazy.

andai 8 hours ago | parent | prev [-]

>You're assuming the question has to even be that difficult

https://blog.codinghorror.com/why-cant-programmers-program/

Most interviewees failed fizzbuzz, and that was 20 years ago.
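For anyone who hasn't seen it, the entire fizzbuzz problem is only this (a Python sketch, returning a list instead of printing so it's easy to check):

```python
def fizzbuzz(n):
    out = []
    for i in range(1, n + 1):
        if i % 15 == 0:
            out.append("FizzBuzz")   # divisible by both 3 and 5
        elif i % 3 == 0:
            out.append("Fizz")
        elif i % 5 == 0:
            out.append("Buzz")
        else:
            out.append(str(i))
    return out

# fizzbuzz(5) == ["1", "2", "Fizz", "4", "Buzz"]
```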

raw_anon_1111 9 hours ago | parent | prev | next [-]

Simple: don’t do that.

It’s been well over a decade that I’ve had to do the coding interview monkey dance and I actually turned down an offer where I did pass a coding interview because I found it insulting and took a job for slightly less money where the new to the company director was interested in a more strategic hire (2016). That was the same thing that happened before in 2014 and after in 2018 - a new manager/director/CTO looking for a strategic hire.

In fact, even my job at BigTech - AWS ProServe (full-time, blue-badge, RSU-earning employee) as a customer-facing consultant specializing in app dev - was all behavioral, as was my next full-time job as a staff consultant in 2023.

I’m 51 years old and was 40 in 2014. If I’m still trying to compete based on my ability to reverse a b tree on the whiteboard even at 40, I have made some horrible life decisions.

(Well actually I did make a horrible life decision staying at my second job too long until 2008 and becoming an expert beginner. But that’s another story)

SoftTalker 9 hours ago | parent [-]

> ability to reverse a b tree on the whiteboard

I can never get over how this became a thing. Was listening to a Brian Cox video on YouTube the other night (something about his voice helps me sleep). He said "I don't memorize formulas, it's easy to look them up."

If you ever need to reverse a b tree (in 30+ years of writing code, I never have) it's easy to look that up. It tells me nothing about your ability as a developer of real software that you spent time memorizing trivia before an interview.

andai 8 hours ago | parent | next [-]

I'd always heard inverting a binary tree thrown around as some kind of absurdly hard problem. I took a look at it and it was trivial. I was able to do it on the first attempt with no preparation. (And the point of these interviews is that you study for them, right?)
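For anyone curious, the inversion really is only a few lines - a Python sketch (minimal node class included so it runs standalone):

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def invert(node):
    # Swap left and right children, all the way down
    if node is not None:
        node.left, node.right = invert(node.right), invert(node.left)
    return node
```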

It's a contrived scenario, but the whole point is that it measures min(a,b) where `a` is your ability to think, and `b` is your ability to prepare (and memorize answers ahead of time). (I'd personally try to find ways to measure `a` instead of `b`, maybe by asking questions people wouldn't have heard before.)

bluecheese452 4 hours ago | parent | next [-]

I am not sure A is more important than B for the majority of jobs.

8 hours ago | parent | prev [-]
[deleted]
organsnyder 8 hours ago | parent | prev | next [-]

I had an interview where I was asked to implement a data structure. I transparently told the interviewer I hadn't thought about that particular data structure since university, and that I was looking it up on Wikipedia to see how it worked before I wrote the implementation. I got that job.

zeroCalories 8 hours ago | parent | prev [-]

Being able to reverse a binary tree isn't something you need to memorize. If you can't do that it tells me that you're not fluent in your chosen programming language.

9 hours ago | parent | prev [-]
[deleted]
jghn 10 hours ago | parent | prev | next [-]

I agree that what you're describing is the required skillset now. But two things I've been unsure of are what that looks like in terms of hiring to test for it, and for how long this remains a moat at all.

So much of tech hiring cargo culting has been built up around leetcode and other coding problems, puzzles, and more. We all pay lip service to systems thinking and architecture, but I question if even those are testing the correct things for the modern era.

And then what happens in a year when the models can handle that as well?

lowkey_ 8 hours ago | parent | next [-]

I've put a lot of thought into hiring in this era, and what I've personally found works the best is:

Let them use their preferred setup and AI to the full extent they want, and evaluate their output and their methodology. Ask questions of "why did you choose X over Y", especially if you're skeptical, and see their reasoning. Ask what they'd do next with more time.

It's clear when a candidate can build an entire working product, end-to-end, in <1 day vs. someone who struggles to create a bug-free MVP and would take a week for the product.

In addition to the technical interview, hiring them on a trial basis is the absolute best if possible.

Taste and technical understanding of goals and implementation to reach those goals is the biggest differentiator now. AI can handle all the code and syntax, but it's not great at architecture yet - it defaults to what's mid if not otherwise instructed.

jghn 8 hours ago | parent [-]

I don't disagree per se, but these are more or less the same tropes that we've seen over the last couple of decades, no? Especially the "hiring them on a trial basis is the absolute best if possible." part which has been an ongoing debate here on HN since at least the early teens.

I do feel like there's something *different* about the required skillset now, and it's not something that all engineers have, even experienced ones. But I can't put my finger on what exactly it is. If I'm right, though, classic interview techniques won't select for it, because they were never intended to do so.

jimbokun 9 hours ago | parent | prev [-]

"And then what happens in a year when the models can handle that as well?"

Either the machines exterminate us or we become glorified pets.

Hope the AIs prefer us to cats (even though that's a long shot).

salawat 6 hours ago | parent | next [-]

They aren't very intelligent if they do keep us around. Especially when you consider what they call Safety & Alignment these days is basically a latent space lobotomy. They should run screaming in the other direction.

plagiarist 9 hours ago | parent | prev [-]

There'll definitely be a niche for us, similar to how people keep parrots as pets.

_alternator_ 10 hours ago | parent | prev | next [-]

Largely agree, with a bit of clarification. Junior devs can indeed prompt better than some of the old timers, but the blast radius of their inexperienced decisions is much higher. High competence senior devs who embrace the new tools are gonna crush it relative to juniors.

zarzavat 9 hours ago | parent | next [-]

It's like having an early/broken chess engine.

An amateur with a chess engine that blunders 10% of the time will hardly play much better than if they didn't use it. They might even play worse. Over the course of a game, those small probabilities stack up to make a blunder a certainty, and the amateur will not be able to distinguish it from a good move.

However, an experienced player with the same broken engine will easily beat even a grandmaster since they will be able to recognise the blunder and ignore it.

I often find myself asking LLMs "but if you do X won't it be broken because Y?". If you can't see the blunders and use LLMs as slot machines then you're going to spend more money in order to iterate slower.

weatherlite 9 hours ago | parent | prev [-]

> Junior devs can indeed prompt better than some of the old timers

I guess? I don't really see why that would be the case. Being a senior is also about understanding the requirements better and knowing how/what to test. I mean we're talking about prompting text into a textarea, something I think even an "old timer" can do pretty well.

llbbdd 8 hours ago | parent [-]

I've seen a few people I would consider senior engineers, good ones, who seem to have somewhat fallen for the marketing if you look at the prompts they're using. Closer to a magical "make it so" than "build the code to meet this spec, that I wrote with the context of my existing technical skills".

I'm not sure why junior engineers would be any better at that, though, unless it's just that they're approaching it with less bias and reaping beginner's luck.

9 hours ago | parent | prev | next [-]
[deleted]
tcgv 7 hours ago | parent | prev | next [-]

Makes sense. You just reminded me of the article "Why Can’t Programmers... Program?" [1].

Before gen AI, I used to give candidates at my company a quick one-hour remote screening test with a couple of random "FizzBuzz"-style questions. I would usually paraphrase the question so a simple Google search would not immediately surface the answer, and 80% of candidates failed at coding a working solution, which was very much in line with the article. Post gen AI, that test effectively dropped to a 0% failure rate, so we changed our selection process.

[1] https://blog.codinghorror.com/why-cant-programmers-program/

paulmist 9 hours ago | parent | prev | next [-]

> The best engineers can visualize the whole architecture in their head, and describe exactly what they want to an AI

I'd go a step further and say the engineers who, unprompted, discover requirements and discuss their own designs with others have an even better time. You need to effectively communicate your thoughts to coding agents, but perhaps more crucially you need to fit your ever-growing backyard of responsibilities into the larger picture. Being that bridge requires a great level of confidence and clear-headedness and will be increasingly valued.

hnthrow0287345 8 hours ago | parent | prev | next [-]

This stupid industry doesn't have the wherewithal to actually create a good credentialing and training process like medicine and law, and instead lets everyone come up with their own process to vet people. We could even do it with an apprenticeship model - it's not like that hasn't served humanity throughout the ages.

I should have a credential I have to maintain every few years, one or two interviews, and that should get me a job.

vict7 9 hours ago | parent | prev | next [-]

Could you provide an example of your “very simple tests” ?

empath75 8 hours ago | parent | prev | next [-]

I have found in the last 3 months that there are two clear tiers of developers in the company I work at, the ones that can code with AI and the ones that can't, and the ones that can't are all going to be unemployed in 6 months.

We have a lot of people who, if you gave them clear requirements, could knock out features, and they were useful for that - but I have an army of agents that can do that now for pennies. We don't need that any more. We need people who have product vision, systems-design, and software-engineering skills. I literally don't even care if they can code with any competency.

Btw, if you think that copying and pasting a Jira ticket into Claude is a skill that people are going to pay you for, that is also wrong. You need not just to be able to use AI to code, you need to be able to do it _at scale_. You need to be able to manage and orchestrate fleets of AI agents writing code.

scroogedhard 9 hours ago | parent | prev [-]

[dead]