pfisherman 4 hours ago

This is going to catch some heat, but what if the most important professional “developer skill” to learn or improve is how to effectively use coding agents?

I saw something similar in ML when neural nets came around. The whole “stack moar layerz” thing is a meme, but it was a real sentiment about newer entrants into the field not learning anything about ML theory or best practices. As it turns out, neural nets “won” and using them effectively required development and acquisition of some new domain knowledge and best practices. And the kids are ok. The people who scoffed at neural nets and never got up to speed not so much.

Edit: as an aside, I have learned plenty from reviewing coding agent generated implementations of various algorithms or methods.

MetaWhirledPeas 3 hours ago | parent | next [-]

> what if the most important professional “developer skill” to learn or improve is how to effectively use coding agents?

Well, it's not. There's a small moat around that right now because the UX is still being ironed out, but in a short while, being able to use coding agents will be the new being able to use Excel.

What will remain are the things that already differentiate a good developer from a bad one:

- Able to review the output of coding agents

- Able to guide the architecture of an application

- Able to guide the architecture of a system

- Able to minimize vulnerabilities

- Able to ensure test quality

- Able to interpret business needs

- Able to communicate with stakeholders

rkapsoro 3 hours ago | parent | next [-]

I think you're agreeing with him. All of the things you just listed are key senior developer skills.

SoftTalker an hour ago | parent | prev | next [-]

None of those things will be necessary if progress continues as it has. The AI will do all of that. In fact, it will generate software that uses already-proven architectures (instead of inventing new ones for every project, as human developers like to do). The testing has already been done: they work. There are no vulnerabilities. They are able to communicate with stakeholders (management) in their native language, not the technobabble human developers like to use, so they understand the business needs natively.

array_key_first an hour ago | parent | next [-]

If this is the case then none of us will have jobs; we will be completely useless.

I think, most likely, you'll still need developers in the mix to make sure the development is going right. You can't just have only business people, because they have no way to gauge if the AI is making the right decisions in regards to technical requirements. So even if the AI DOES get as good as you're saying, they wouldn't know that without developers.

bobthepanda an hour ago | parent | prev | next [-]

Humans are still in the loop as the final signoff responsible for liability, and to do an audit you’ll need someone who knows what they’re looking at.

SoftTalker an hour ago | parent [-]

Liability will be waived in the terms of use.

lelandbatey an hour ago | parent | prev [-]

> They work

For some definition of "work", yes, but not every definition. Their product is not without flaw, which leaves room for improvement, and room for improvement by more than just other AIs.

> There are no vulnerabilities

That's just not true. There are loads of vulnerabilities, just as there are plenty of vulnerabilities in human-written code. Try it: point a vuln-hunting AI at the output of an AI that's been through the highest-intensity, highest-scrutiny workflow, even code that has already been AI-reviewed for vulnerabilities.

jnovek 3 hours ago | parent | prev [-]

> Able to review the code output of coding agents

That probably won’t be necessary in a few years.

circlefavshape 3 hours ago | parent | next [-]

It's necessary for devs right now, no matter how good they are, and it's those devs' code the models are trained on

prewett 2 hours ago | parent [-]

Even worse, the training set probably includes a lot of code that needed review but didn't get it...

keeda 36 minutes ago | parent [-]

If we know the outcome of that code, such as whether it caused bugs or data corruption or a crappy UX or tech debt -- which is potentially available in subsequent PR commit messages -- it's still valuable training data.

Probably even more valuable than code that just worked, because evidently we have enough of that and AI code still has issues.

rafterydj 3 hours ago | parent | prev | next [-]

I've seen this line of thought put out there many times, and it makes me think: why do people do anything at all? What's the point? If no one at all is even reviewing the output of coding agents, genuinely, what are we doing as a society?

I fail to see how we transition society into a positive future without supplying means of verifying systemic integrity. There is a reason that Upton Sinclair became famous: wayward incentives behind closed doors generally cause subpar standards, which cause subpar results. If the FDA didn't exist, or they didn't "review the output", society would be materially worse off. If the whole pitch for AI ends with "and no one will even need to check anything" I find that highly convenient for the AI industry.

ndriscoll 3 hours ago | parent [-]

You could, e.g., write specs and review only high-level types, plus have deterministic validation that no type escapes/"unsafe" hatches were used, or instruct another agent to make adversarial black-box attempts to break the primary artifact's functionality (which is really just to say "perform QA").
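A minimal sketch of the kind of deterministic validation described above, assuming a Python codebase. The pattern list and function name are illustrative, not from the comment; every project would choose its own forbidden escape hatches:

```python
import pathlib
import re

# Hypothetical "escape hatch" patterns to reject in agent-written Python.
# The exact list is an assumption; adjust per project.
FORBIDDEN = [
    r"#\s*type:\s*ignore",  # silenced type checker
    r"\bcast\(",            # typing.cast bypasses checking
    r"\beval\(",            # arbitrary code execution
]

def find_escape_hatches(root: str) -> list[str]:
    """Return 'path:line: text' for every forbidden pattern found under root."""
    hits = []
    for path in sorted(pathlib.Path(root).rglob("*.py")):
        for lineno, line in enumerate(path.read_text().splitlines(), 1):
            if any(re.search(pat, line) for pat in FORBIDDEN):
                hits.append(f"{path}:{lineno}: {line.strip()}")
    return hits
```

Run in CI against the agent's output; a nonempty result fails the build, no human judgment required for that particular gate.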

As a simple use-case, I've found LLMs to be much better than me at macro programming, and I don't really need to care about what it does because ultimately the constraint is just that it bends the syntax I have into the syntax I want, and things compile. The details are basically irrelevant.
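The "things compile" constraint can itself be made mechanical. A toy sketch (the function name is mine, not the commenter's) that accepts generated Python only if it parses; in a compiled language the compiler plays this role:

```python
import ast

def passes_syntax_gate(source: str) -> bool:
    """Mechanical acceptance check for generated code:
    reject anything that doesn't even parse."""
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False
```

This is the weakest possible gate, but it illustrates the point: when the only requirement is "bends the syntax I have into the syntax I want, and things compile", acceptance need not involve reading the details.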

surajrmal 2 hours ago | parent | next [-]

Code quality will impact the effectiveness of AI. Less code to read and change in subsequent iterations is still valuable.

There was a while where I became more of a paper architect and stopped coding, and I realized I wasn't able to do sufficient code reviews anymore because I lacked context. I went back into the code at some point, realized the mess my team was making, and spent a long while cleaning it up. This improved the productivity of everyone involved.

I expect AI to fall into a similar predicament. Without first-hand knowledge of the implementation details, we won't know about the problems we need to tell the AI to address. There are also many systems that are constrained in terms of memory and compute, and more code likely puts you up against those limits.

rafterydj 2 hours ago | parent | prev [-]

I mean, sure, for programming macros. Or programming quick scripts, or type-safe or memory-safe programs. Or web frontends, or a11y, or whatever tasks for which people are using AI.

But if you peel back that layer to the point where you are no longer discussing the code, and just saying "code X that does Y"... how big is X going to get without verifying it? This is a basic, fundamental question that gets deflected by evaluating each case where AI is useful.

When you stop being specific about what the AI is doing and speak in general terms, there is a massive and obvious gap that nobody is adequately addressing. I don't think anyone would say the details are irrelevant in life-threatening scenarios, and yet no one is acknowledging where the logical end of this line of thinking goes.

falkensmaize 3 hours ago | parent | prev [-]

They will still be turning out the same problematic code in a few years that they do now, because they aren’t intelligent and won’t be intelligent unless there is a fundamental paradigm shift in how an LLM works.

I use LLMs with best practices to program professionally in an enterprise every day, and even Opus 4.6 still consistently makes some of the dumbest architectural decisions, even with full context, complete access to the codebase and me asking very specific questions that should point it in the right direction.

stevepotter 2 hours ago | parent [-]

I keep hearing that they "aren't intelligent" and spit out "crap code". That hasn't been my experience. LLMs have prevented, and also caught, intricate concurrency issues that would have taken me a long time.

I just went “hmmm, nice” and went on. The problem there is that I didn’t get that sense of accomplishment I crave and I really didn’t learn anything. Those are “me” problems but I think programmers are collectively grappling with this.

rekrsiv 3 hours ago | parent | prev | next [-]

The endgame in programming is reducing complexity before the codebase becomes impossible to reason about. This is not a solved problem, and most codebases the LLMs were trained on are either just before that phase transition or well past it.

Complexity is not just a matter of reducing the complexity of the code, it's also a matter of reducing the complexity of the problem. A programmer can do the former alone with the code, but the latter can only be done during a frank discussion with stakeholders.

A vibe coder using an LLM to generate complexity will not be able to tell which complexity to get rid of, and we don't have enough training data of well-curated complexity for LLMs to figure it out yet.

trollbridge 3 hours ago | parent [-]

No kidding. So far the complexity introduced by LLM-generated code in my current codebase has taken far more time to deal with than the hand-written code.

Overall, we are trying to "silo" LLM-generated code into its own services with a well-defined interface so that the code can just be thrown away and regenerated (or rewritten by hand) because maintaining it is so difficult.
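One way to sketch that siloing in Python, assuming a hypothetical `ReportRenderer` seam (all names here are illustrative): hand-maintained code depends only on a narrow Protocol, so the generated module behind it can be thrown away and regenerated without touching callers.

```python
from typing import Protocol

class ReportRenderer(Protocol):
    """The well-defined interface: all hand-written code depends only on this."""
    def render(self, rows: list[dict]) -> str: ...

# A disposable, agent-generated implementation lives behind the seam.
# If it becomes unmaintainable, regenerate or rewrite it; callers don't change.
class PlainTextRenderer:
    def render(self, rows: list[dict]) -> str:
        return "\n".join(
            ", ".join(f"{k}={v}" for k, v in row.items()) for row in rows
        )

def publish(renderer: ReportRenderer, rows: list[dict]) -> str:
    # Hand-maintained caller: only the Protocol is load-bearing.
    return renderer.render(rows)
```

The design choice is the same as for any vendored dependency: keep the interface small and owned by humans, and treat everything behind it as replaceable.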

tpdly an hour ago | parent [-]

Yeah, same. I like the silo idea, I'll have to explore that.

I'm relieved to hear this, because the LLM hype in this thread is seriously disorienting. I'm deeply convinced that coding "by hand" is just as defensible in the LLM age as handwriting was in the TTY age. My dopamine system is quite unconvinced, though, which is killing me.

dspillett 3 hours ago | parent | prev | next [-]

> This is going to catch some heat, but what if the most important professional “developer skill” to learn or improve is how to effectively use coding agents?

If it does go as far that way as many seem to expect (or, indeed, want), then most people will be able to do it. There will be a dearth of jobs and many people wanting them, so it'll be a race to the bottom for all but the lucky few: development will become a minimum-wage job, or so close to it that it'll make no odds. If I'm earning minimum wage, it isn't going to be sat on my own doing someone else's prompting; I'll find a job that doesn't involve sitting alone in front of a screen and reclaim programming for hobby time (or just stop doing it at all; I have other hobbies to divide my time between). I dislike (effectively) being a remote worker already, but put up with it for the salary. If the salary goes because "AI" turns it into a race-to-the-bottom job, then I'm off.

Conversely: if that doesn't happen then I can continue to do what I want, which is program and not instruct someone else (be it a person I manage or an artificial construct) to program. I'm happy to accept the aid of tools for automation and such, I've written a few of my own, but there is a line past which my interest will just vanish.

falkensmaize 3 hours ago | parent [-]

What the people excited about the race to the bottom scenario don’t seem to understand is that it doesn’t mean low skill people will suddenly be more employable, it means fewer high skill people will be employable.

No one will be eager to employ “ai-natives” who don’t understand what the llm is pumping out, they’ll just keep the seasoned engineers who can manage and tame the output properly. Similarly, no one is going to hire a bunch of prompt engineers to replace their accountants, they’ll hire fewer seasoned accountants who can confidently review llm output.

ArnoVW an hour ago | parent [-]

And those that do haven't yet understood what will happen when those seasoned workers retire and there are no juniors or mid-level engineers who could have grown into the role, because they were replaced by AI.

bonoboTP 3 hours ago | parent | prev | next [-]

I also remember a similar wave around 10-15 years ago, when ML tooling and libraries became more accessible, more open source was released, etc. People whose value-add was knowing MATLAB toolboxes and keeping their code private got very afraid when Python, numpy, scikit-learn, Theano, etc. came to the forefront. And people started releasing code along with research papers on GitHub. Anyone could grab that working code and start tweaking the equations, putting different tools and techniques together, even if they didn't work at one of those few companies or hadn't done an internship at a lab in the know.

Or other people who just kept their research dataset private and milked it for years training incrementally better ML models on the same data. Then similar datasets appeared openly and they threw a hissy fit.

Usually there are a million little tricks and oral culture around how to use various datasets, configurations, hyperparameters etc and papers often only gave the high level ideas and math away. But when the code started to become open it freaked out many who felt they won't be able to keep up and just wanted to keep on until retirement by simply guarding their knowledge and skill from getting too known. Many of them were convinced it's going to go away. "Python is just a silly, free language. Serious engineers use Matlab, after all, that's a serious paid product. All the kiddies stacking layers in Theano will just go away, it's just a fad and we will all go back to SVM which has real math backing it up from VC theory." (The Vapnik-Chervonenkis kind, not the venture capital kind.)

I don't want to be too dismissive, though. People build up an identity, like the village blacksmith back in the day, and just want to keep doing it: build a life on a skill learned in their youth, do it 9 to 5, and focus on family, etc. I get it. But wishing won't make it so.

Talented, skilled people with good intuition and judgements will be needed for a long time but that will still require adapting to changing tools and workflows. But the bulk of the workforce is not that.

poody 2 hours ago | parent [-]

This is so true... I am having issues with the change right now.. being older and trying to incorporate agentic workflow into MY workflow is difficult as I have trust issues with the new codebase.. I do have good people skills with my clients, but my secret sauce was my coding skilz.. and I built my identity around that..

dgb23 an hour ago | parent [-]

The cure for me has been to write an agent myself from first principles.

Tailored to my workflow, style, goals, projects and as close as possible to what I think is how an agent should work. I’m deliberately only using an existing agent as a rubber duck.

It’s a very empowering learning experience.

tonyedgecombe 4 hours ago | parent | prev | next [-]

Using a coding agent seems quite low skill to me. It’s hard to see it becoming a differentiator. Just look at the number of people who couldn’t code before and are suddenly churning out work to confirm that.

bachmeier 3 hours ago | parent [-]

> Using a coding agent seems quite low skill to me.

I agree if that's all you can do. Using a coding agent to complement a valuable domain-specific skill is gold.

nunez 2 hours ago | parent [-]

Which is why many technical, business-facing people are super excited about AI (at the cost of developers).

veidr 2 hours ago | parent | prev | next [-]

It absolutely is, but the fundamental misunderstanding here is failing to see that "effectively using coding agents" is a superset of the 2023-era general understanding of "Senior Software Engineer".

At least when you're talking about shipping software customers pay for, or debugging it, etc. Research, narrow specializations, etc may be a different category and some will indeed be obsoleted.

mcdeltat 4 hours ago | parent | prev | next [-]

I think your argument is predicated on LLM coding tools providing significant benefit when used effectively. Personally I still think the answer is "not really" if you're doing any kind of interesting work that's not mostly boilerplate code writing all day.

dasil003 4 hours ago | parent | next [-]

Define interesting. In my experience most business logic is not innovative or difficult, but there are ways to do it well or ways to do it terribly. At the senior levels I feel 90% of the job is deciding the shape of what to build and what NOT to build. I find AI very useful in exploring and trying more things but it doesn’t really change the judgment part of the job.

xeromal 2 hours ago | parent | prev [-]

How much of software development work is interesting? A fraction of a percent? I'd argue that most of us, including most startups, work on things that help businesses make money, and that's pretty "boring" work.

windward 3 hours ago | parent | prev | next [-]

Many of those skills have temporary value before they're incorporated into the models/harnesses

ozozozd 3 hours ago | parent | prev | next [-]

There was a moment we thought JS had won. And then crypto. I personally believed low-level development was done.

nunez 2 hours ago | parent | next [-]

Claude Code is written in TypeScript, which compiles to JS, so I think JS _did_ win...

underlipton 3 hours ago | parent | prev [-]

Crypto did win, just not where you're looking.

MrDarcy 4 hours ago | parent | prev | next [-]

Not sure why this would catch heat, rationally speaking. It is quite clear that, in a professional setting, effective use of coding agents is the most important skill for an individual developer to be developing.

It’s also the most important capability engineering orgs can be working on developing right now.

Software Engineering itself is being disrupted.

anticorporate 3 hours ago | parent | prev | next [-]

I'd offer an edit that the most important skill may be knowing when the agent is wrong.

There's so much hand wringing about people not understanding how LLMs work and not nearly enough hand wringing about people not understanding how computer systems work.

nunez 2 hours ago | parent | prev | next [-]

> This is going to catch some heat, but what if the most important professional “developer skill” to learn or improve is how to effectively use coding agents?

Doing so will effectively force a (potentially unwanted) career change for many people and will lead to the end of software engineering (and software as a career), assuming AI continues to improve.

"Effectively" using agents means that you're writing specs and reading code (in batches through change diffs) instead of writing code directly. This requires the ability to write well (or well enough to get what you want from the agent) and clearly communicate intent (in your language of choice, not code; very different IMO).

The way that you read code is different with agents as well. Agents can produce a smattering of tests alongside implementation in a single turn. This is usually a lot of code. Thus, instead of red-green-refactor'ing a single change that you can cumulatively map in your head, you're prompt-build-executing entire features all at once and focusing on the result.

Code itself loses its importance as a result. See also: projects that are moving towards agentic-first development using agents for maintenance and PR review. Some maintainers don't even read their codebases anymore. They have no idea what the software is actually doing. Need security? Have an agent that does nothing but security look at it. DevOps? Use a DevOps agent.

This isn't too far off from what I was doing as a business analyst a little over 20 years ago (and what some technical product managers do now for spikes/prototypes). I wrote FRDs [^0] describing what the software should do. Architects would create TRDs [^1] from those FRDs. These got sent off to developers to get developed, then to QA to get bugs hammered out, then back to my team for UAT.

If agents existed back then, there would've been way fewer developers/QA in the middle. Architects would probably do a lot of what they would've done. I foresee that this is the direction we're heading in, but with agents powered by staff engineers/Enterprise Architects in the middle.

> Edit: as an aside, I have learned plenty from reviewing coding agent generated implementations of various algorithms or methods.

People learn differently. I (and others) learn from doing. Typing code from Stack Overflow/Expertsexchange/etc instead of pasting it, then modifying it is how I learned to code. Some can learn from reading alone.

[^0]: https://www.modernanalyst.com/Resources/Articles/tabid/115/I...

mxkopy 4 hours ago | parent | prev | next [-]

I don’t think it could be the most important skill to have. The most common, and the most standardized one for sure, but if coding agents are doing fundamental R&D or running ops then nobody needs skills anyway.

> As it turns out, neural nets “won”

> The people who scoffed at neural nets and never got up to speed not so much.

I get the feeling you don't know what you're talking about. LLMs are impressive, but what have they "won" exactly? A decade after their debut, they still require millions of dollars of infrastructure to run, and we're really having trouble using them for anything all that serious. I'm sure in a few decades' time this comment will read like a silly cynic's, but I bet that will only be after those old-school machine learning "losers" come back around and start making improvements again.

shmerl 2 hours ago | parent | prev [-]

I'd say viewing it as the most important skill is pretty unprofessional. But isn't that the point of this extreme AI push? To replace professional skills with dummy parrots.