neversupervised 4 hours ago

It’s completely wild to me that lifelong programmers come into contact with agentic coding and conclude that their jobs are safe for one reason or another. AI will definitely be able to write entire software systems, inclusive of figuring out requirements and asking the right questions. It’s not that far off already. Why does everyone look at the weaknesses of a technology that didn’t exist a couple of years ago instead of appreciating the incredible rate of improvement? I know why: because it’s inconvenient to the narrative of what makes us valuable. But still, our job is to turn ideas into a sequence of logical steps. Why can’t we do the same when forecasting the impact of AI on our jobs?

sevenzero 4 hours ago | parent [-]

>...the incredible rate of improvement?

Because the "rate of improvement" is only astonishing in well understood areas, and really only astonishing if you yourself are not that great at what you do. Speaking for myself here, my job is extremely safe given that my boss doesn't wanna sit there and prompt AI all day, and I work in a fun little 4-person company. We already have plans for the next 3 years which involve me :-)

themgt 3 hours ago | parent [-]

> Because the "rate of improvement" is only astonishing in well understood areas and really only astonishing if you yourself are not that great at what you do.

This is a bold but vague claim many on HN make without ever putting back-of-napkin numbers on it. E.g., do you think agentic Opus 4.7/GPT 5.5 are 95th percentile coders but you're 98th percentile? Or are you saying you're a middle-of-the-road 60th percentile coder and AI is at the 20th percentile, so only the bottom 20% of programmers should worry? Let's be specific about the claim being made.