sethops1 12 hours ago

> The response for now? Junior and mid-level engineers can no longer push AI-assisted code without a senior signing off.

So basically, kill the productivity of senior engineers, kill the ability for junior engineers to learn anything, and ensure those senior engineers hate their jobs.

Bold move, we'll see how that goes.

whateveracct 12 hours ago | parent | next [-]

Juniors could just code things the old fashioned way. It isn't hard. And if they do find it too hard, they aren't cut out for this job.

sdevonoes 11 hours ago | parent | next [-]

But aren’t companies enforcing AI usage? If not, wait for it.

ritlo 11 hours ago | parent [-]

Mine's tracking it, complete with a leaderboard (LOL), and it's been suggested to me that it'd be in my best interest not to sit too low on that list. So I suspect that in the back half of the year, some sterner conversations and/or pink slips are coming for those who haven't caught on that they need to at least be sending some make-work crap to their LLMs every day, even if they immediately throw the output in the metaphorical garbage bin.

It's basically an even-more-ridiculous version of ranking programmers by lines-of-code/week.

What's especially comical is that I've seen enormous gains in my (longish, at this point) career from learning other tools (e.g. expanding my familiarity with Unix or otherwise fairly common command line tools), and never, ever has anyone measured how much I'm using them, and never, ever has management become in any way involved in pushing them on me.

It's like the CEO coming down to tell everyone they'll be making sure all the programmers are using regular expressions enough, and tracking time spent engaging with regular expressions, or they'll be counting how many breakpoints they're setting in their debuggers per week. WTF? That kind of thing should be leads' and seniors' business, to spread and encourage knowledge and appropriate tool use among themselves and with juniors, to the degree it should be anyone's business. Seems like yet another smell indicating that this whole LLM boom is built on shaky ground.

tavavex 10 hours ago | parent | next [-]

> It's like the CEO coming down to tell everyone they'll be making sure all the programmers are using regular expressions enough, and tracking time spent engaging with regular expressions, or they'll be counting how many breakpoints they're setting in their debuggers per week.

That's because they weren't sold regex as a service by a massive company, while also being reassured by everyone that any person not using at least one regular expression per line of code is effectively worthless and exposes their business to a threat of immediate obsolescence and destruction. They finally found a way to sell that same kind of FOMO to a majority of execs in the software industry.

to11mtm 10 hours ago | parent | prev | next [-]

> even if they immediately throw the output in the metaphorical garbage bin.

Gotta be careful if you do that tho; e.g. Copilot can monitor 'accept' rate, so at bare minimum you'd have to accept the changes, then immediately back them out...

tavavex 10 hours ago | parent | next [-]

In a couple years, we'll have office workspaces equipped with EEG helmets that you must wear while working, to measure your sentiment upon seeing LLM-generated code. The worst performers get the boot, so you better be happy!

ourmandave 10 hours ago | parent | prev | next [-]

I wonder if Copilot can write a commit and backout routine for them.
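For what it's worth, the "routine" is trivial with plain git: accept and commit the suggestion (so any acceptance tracking counts it), then revert it in one step. A throwaway sketch in a scratch repo (hypothetical filenames and identity; no actual Copilot API involved):

```shell
# Hypothetical demo: "accept" an AI-suggested change, then immediately back it
# out, leaving the working tree exactly as it was before.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.com"   # throwaway identity for the demo repo
git config user.name "Dev"

echo "original line" > app.txt
git add app.txt
git commit -qm "baseline"

echo "AI-suggested line" >> app.txt       # the "accepted" suggestion
git add app.txt
git commit -qm "accept AI suggestion"

git revert --no-edit HEAD                 # back it out as a third commit

cat app.txt                               # back to: original line
```

The history ends up with three commits (baseline, acceptance, revert), which is arguably worse than never accepting: the metric goes up while the codebase only gains noise.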

lovich 9 hours ago | parent | prev [-]

If you use AI to back it out, sounds like you’ve found an infinite feedback loop for those metrics.

Did industrial psychology die out as a field? Why do we keep reinventing the wheel when it comes to perverse incentives? It's like working on a scrum team where the big bosses expect the average velocity to go up every sprint, forever, but the engineers are the ones deciding the point totals on tickets.

bonesss 10 hours ago | parent | prev | next [-]

From a management perspective I would be highly skeptical of token leaderboards. You are incentivizing people to piss away company money for uncertain rewards.

I mean… throw some docs into the context window, see it explode. Repeat that a few times with some multi-step workflows. Presto, hundreds of dollars in “AI” spending accomplishing nothing. In olden days we’d just burn the cash in a waste paper basket.

tren_hard 8 hours ago | parent [-]

My company doesn’t enforce AI usage but for those who choose to use it, every month they highlight the biggest users. It’s always non-tech people who absolutely don’t understand how LLMs work and just run a single chat for as long as possible before our system cuts them off and forces them into a new chat context.

dboreham 7 hours ago | parent [-]

"Can't fix stupid"

slopinthebag 10 hours ago | parent | prev [-]

What's stopping someone from just having the AI churn out garbage all day long? Or like, put your AI into plan mode with extra high reasoning and have it churn for 10 minutes to make a microscopic change in some source file. Repeat ad infinitum.

baal80spam 7 hours ago | parent [-]

> What's stopping someone from just having the AI churn out garbage all day long?

In my case it's morality.

ummonk 4 hours ago | parent | next [-]

I would argue that making the company experience the consequences of its choice of metrics / mandates is in fact a moral imperative.

bravetraveler 5 hours ago | parent | prev [-]

Interesting consideration, 'mandates' and all. Definitely in camp 'toss the output', here. I think I'll see 'morality' leaving when $EMPLOYER fires 'professional discretion'... forcing usage and, ultimately, debasing the position.

edit: Peer said it well, IMO. The consequences aren't really yours. Also: something, something, Goodhart's Law.

throw_m239339 11 hours ago | parent | prev | next [-]

Aren't these companies mandating the use of these tools in the first place? Juniors aren't the problem.

thewhitetulip 11 hours ago | parent | prev [-]

Well, not when they are mandated to use AI tools and asked for justification about their usage!

I'm speaking in general; I've never worked at Amazon.

dragonelite 12 hours ago | parent | prev | next [-]

Accelerates a person's speed toward being burned out...

altairprime 12 hours ago | parent [-]

...and you lower overall engineering salary spend by rotating out seniority-paid engineers for newly-promoted AI reviewers with lower specs.

dude250711 7 hours ago | parent | prev | next [-]

But isn't Amazon something you tolerate for a year or two early in your career, before moving somewhere better (which is anywhere else)?

almostdeadguy 12 hours ago | parent | prev [-]

I'm sorry, what? Junior engineers can't learn anything without using AI assistants (or is the implication that having seniors review their code makes them incapable of learning?), and senior engineers would hate their jobs reviewing more code from their teammates? What reality do people live in now?

zdragnar 12 hours ago | parent | next [-]

I thought the implication was that juniors would continue to use AI to stay "productive" (AWS is not a rest and vest job for juniors, from what I've heard) and seniors would no longer have time to do anything but review code from juniors who just spin the AI wheel.

There's a lot of learning opportunity in failing, but if failure just means spam the AI button with a new prompt, there's not much learning to be had.

ritlo 12 hours ago | parent | prev [-]

> senior engineer would hate their jobs reviewing more code from their teammates

Jesus, yes. Maybe I'm an oddball but there's a limit to how much PR reviewing I could do per week and stay sane. It's not terribly high, either. I'd say like 5 hours per week max, and no more than one hour per half-workday, before my eyes glaze over and my reviews become useless.

Reviewing code is important and is part of the job but if you're asking me to spend far more of my time on it, and across (presumably) a wider set of projects or sections of projects so I've got more context-switching to figure out WTF I'm even looking at, yes, I would hate my job by the end of day 1 of that.

almostdeadguy 10 hours ago | parent [-]

If we can't spend that much time reviewing code, what are we exactly doing with this AI stuff?

I don't disagree, I think reviewing is laborious, I just don't see how this causes any unintended consequences that aren't effectively baked into using an AI assistant.

bluefirebrand 5 hours ago | parent [-]

Yes, this is part of why AI tools are bad

Code review is hard and tiring, much more so than writing it.

I've never met anyone who would be okay reviewing code for their full time job