Nextgrid 4 hours ago

LLMs are only a threat if you see your job as a code monkey. In that case you're likely already obsoleted by outsourced staff who can do your job much cheaper.

If you see your job as a "thinking about what code to write (or not)" monkey, then you're safe. I expect most seniors and above to be in this position, and LLMs are absolutely not replacing you here - they can augment you in certain situations.

One of the perks of being a senior is also knowing when not to use an LLM and how LLMs fail; at this point I feel like I have a pretty good idea of what is safe to outsource to an LLM and what to keep for a human. Offloading the LLM-safe stuff frees up your time to focus on the LLM-unsafe stuff (or to just chill and enjoy the free time).

zeroonetwothree 4 hours ago | parent | next [-]

I see my job as having many aspects. One of those aspects is coding. It is the aspect that gives me the most joy even if it's not the one I spend the most time on. And if you take that away then the remaining part of the job is just not very appealing anymore.

It used to be that I didn't mind going through all the meetings, design discussions, debates with PMs, and such, because I got to actually code something cool in the end. Now I get to... prompt the AI to code something cool. And that just doesn't feel very satisfying. It's the same reason I didn't want to be a "lead" or "manager": I want to actually be the one doing the thing.

Nextgrid 4 hours ago | parent [-]

You won't be prompting AI for the fun stuff (unless laying out boring boilerplate is what you consider "fun"). You'll still be writing the fun part - but you will be able to prompt beforehand to get all the boilerplate in place.

archagon 24 minutes ago | parent [-]

If you’re writing that much boilerplate as part of your day-to-day work, I daresay you’re Doing Coding Wrong. (Virtue number one of programming: laziness. https://thethreevirtues.com)

Any drudgework you repeat two or three times should be encapsulated or scripted away, deterministically.
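
For illustration, a minimal sketch in Python of what "scripted away, deterministically" can look like (all names here are hypothetical): once the same lookup-and-serialize block shows up a third time, one small factory replaces every copy.

    import json

    def make_getter(store: dict, key_field: str):
        """Build a lookup handler; one factory replaces N near-identical functions."""
        def handler(request: dict) -> str:
            key = request.get(key_field)
            if key not in store:
                return json.dumps({"error": f"unknown {key_field}: {key!r}"})
            return json.dumps(store[key])
        return handler

    users = {"1": {"name": "Ada"}}
    orders = {"42": {"total": 9.99}}

    get_user = make_getter(users, "user_id")     # the former boilerplate
    get_order = make_getter(orders, "order_id")  # is now two one-liners

    print(get_user({"user_id": "1"}))    # {"name": "Ada"}
    print(get_order({"order_id": "7"}))  # {"error": "unknown order_id: '7'"}

And it's deterministic: the helper behaves the same way on every run, which is exactly what you want from boilerplate.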

AstroBen 3 hours ago | parent | prev | next [-]

There are many tens (hundreds?) of billions of dollars being poured into the smartest minds in the world to push this thing forward.

I'm not so confident that it'll only be code monkeys at risk for much longer.

Nextgrid 3 hours ago | parent | next [-]

Until they can magically increase context length to a size that can conveniently fit the whole codebase, we're safe.

It seems like the billions so far mostly go toward talk of LLMs replacing every office worker rather than toward any action to that effect. LLMs still have major (and dangerous) limitations that make this unlikely.

esafak 2 hours ago | parent [-]

Models do not need to hold the whole code base in memory, and neither do you. You both search for what you need. Models can already memorize more than you!

Jensson 2 hours ago | parent | next [-]

> Models do not need to hold the whole code base in memory, and neither do you

Humans rewire their minds to optimize for the codebase; that is why new programmers take a while to get up to speed. LLMs don't do that, and until they do, they need the entire thing in context.

And the reason we can't do that today is that there isn't enough data in a single codebase to train an LLM to be smart about it, so first we need to solve the problem that LLMs need billions of examples to do a good job. That isn't on the horizon, so we are probably safe for a while.

esafak 2 hours ago | parent [-]

Getting up to speed is a human problem. Computers are so fast they can 'get up to speed' from scratch for every session, and we help them with AGENTS.md files and newer things like memories; e.g., https://code.claude.com/docs/en/memory

It is not perfect yet, but the tooling is improving and I do not see a ceiling here. LSPs + memory largely solve this problem; I run into issues, but this is not a big one for me.

Nextgrid 2 hours ago | parent | prev [-]

I’ll believe it when coding agents can actually produce concise & reusable code instead of reimplementing 10 slightly-different versions of the same basic thing on every run. (This is not a rant; I would love for agents to stop doing that, and I know how to make them stop - with a proper AGENTS.md that serves as a table of contents for where stuff lives - but my point is that as a human I don’t need this, and for now they still do.)
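
For illustration, the kind of table-of-contents AGENTS.md I mean is sketched below (the repo layout and module names are hypothetical):

    # AGENTS.md: where things live. Read this before writing new code.

    - HTTP client with retries: src/lib/http.py (use fetch_with_retry, do not reimplement)
    - Date/time helpers: src/lib/dates.py
    - Validation schemas: src/schemas/ (one module per entity; add new ones here)
    - Feature flags: src/config/flags.py
    - Before adding any helper, grep src/lib/ for an existing one.

Nothing clever: it just tells the agent where the reusable code already lives, so it stops rewriting it.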

Revanche1367 2 hours ago | parent [-]

In my experience they can definitely write concise and reusable code. You just need to say to them “write concise and reusable code.” Works well for Codex, Claude, etc.

Nextgrid 2 hours ago | parent [-]

Writing reusable code is of no use if the next iteration doesn’t know where it is and rewrites the same (reusable) code again.

munksbeer an hour ago | parent [-]

I guide the AI. If I see it produce stuff that I think can be done better, I either just do it myself or point it in the right direction.

It definitely doesn't do a good job of spotting areas ripe for building abstractions, but that is our job. This thing does the boring parts, and I get to use my creativity thinking about how to make the code more elegant, which is the part I love.

As far as I can tell, what's not to love about that?

Nextgrid an hour ago | parent [-]

If you’re repeatedly prompting, I will defer to my usual retort when it comes to LLM coding: programming is about translating unclear requirements from a verbose (English) language into a terse (programming) language. It’s generally much faster for me to write the terse language directly than to play a game of telephone with an intermediary in the verbose language, hoping it will (maybe) translate my intentions into the terse language.

In your example, you mention that you prompt the AI and, if it outputs sub-par results, you rewrite it yourself. That’s my point: over time, you learn what an LLM is good at and what it isn’t, and you just don’t bother with the LLM for the stuff it’s not good at. The thing is, as a senior engineer, most of the stuff you do shouldn’t be stuff that an LLM is good at to begin with. That’s not the LLM replacing you; that’s the LLM augmenting you.

Enjoy your sensible use of LLMs! But LLMs are not the silver bullet that the billions of dollars of investment desperately want us to believe they are.

AstroBen 36 minutes ago | parent [-]

> programming is about translating unclear requirements from a verbose (English) language into a terse (programming) language

Why are we uniquely capable of doing that, but an LLM isn't? In plan mode, I've been seeing them ask for clarifications and gather further requirements.

Important business context can be provided to them as well.

philipwhiuk 2 hours ago | parent | prev [-]

> the smartest minds in the world

Dunning–Kruger is everywhere in the AI grift: people who don't know a field deploy some AI bot that solves the easy 10% of the problem so it looks good on the surface, and assume that just throwing money (which mostly just buys hardware) will solve the rest.

They aren't "the smartest minds in the world". They are slick salesmen.

notnullorvoid 2 hours ago | parent | prev | next [-]

Agreed. Programming languages are not ambiguous; human language is very ambiguous. So if I'm writing something with a moderate level of complexity, it's going to take longer to describe what I want to the AI than to write it myself. Reviewing what an AI writes also takes much longer than reviewing my own code.

AI is getting better at picking up some important context from other code or documentation in a project, but it's still miles away from what it needs to be, and the needed context isn't always present.

jauntywundrkind an hour ago | parent | prev [-]

Yes. And I'm excited as hell.

But I also have no idea how people are going to think about what code to write when they don't write code themselves. Maybe this is all fine, maybe it's OK, but it does make me quite nervous!

Nextgrid an hour ago | parent [-]

That is definitely a problem, but I would say it’s a problem of hiring: billions of dollars’ worth of potential market cap rests on performative bullshit, which encourages companies not to hire juniors just to send a signal and capture some of those billions, regardless of the actual impact on productivity.

LLMs benefit juniors; they do not replace them. Juniors can learn from LLMs just fine and will actually be more productive with them.

When I was a junior, my “LLM” was StackOverflow and the senior guy next to me (who was no doubt tired of my antics), but I would’ve loved to have an actual LLM - it would’ve handled all my stupid questions just fine and freed up senior time for the more architectural questions, or those where I wasn’t convinced by the LLM’s response. Also, at least in my case, I learnt a lot more from reading existing production code than from writing it - LLMs don’t change anything there.