ramraj07 2 days ago

If you don't take a stand and refuse to clean up their mess, aren't you part of the problem? No self-respecting proponent of AI-enabled development should suggest that the engineers generating the code are no longer personally responsible for its quality.

tenacious_tuna 2 days ago | parent | next [-]

Ultimately that's only an option if you can sustain the impact to your career (not getting promoted, or getting fired). My org (publicly traded, household name, <5k employees) is all-in on AI with the goal of having 100% of our code AI generated within the next year. We have all the same successes and failures as everyone else, there's nothing special about our case, but our technical leadership is fundamentally convinced that this is both viable and necessary, and will not be told otherwise.

People who disagree at all levels of seniority have been made to leave the organization.

Practically speaking, there's no sexy pitch you can make about doing quality grunt work. I've made that mistake virtually every time I've joined a company: I make performance improvements, I stabilize CI, I improve code readability, I remove compiler warnings, you name it. But if you're not shipping features, if you're not driving the income needle, you have a much harder time framing your value to a non-engineering audience, who ultimately sign the paychecks.

Obviously this varies wildly by organization, but it's been true everywhere I've worked to varying degrees. Some companies (and bosses) are more self-aware than others, which can help for framing the conversation (and retaining one's sanity), but at the end of the day if I'm making a stand about how bad AI quality is, but my AI-using coworker has shipped six medium sized features, I'm not winning that argument.

It doesn't help that I think non-engineers view code quality as a technical boogeyman and an internal issue to their engineering divisions. Our technical leadership's attitude towards our incidents has been "just write better code," which... Well. I don't need to explain the ridiculousness of that statement in this forum, but it undermines most people's criticism of AI. Sure, it writes crap code and misses business requirements; but in the eyes of my product team? That's just dealing with engineers in general. It's not like they can tell the difference.

sjducb a day ago | parent | next [-]

Hi thanks for this brilliant feature. It will really improve the product. However it needs a little bit more work before we can merge it into our main product.

1) The new feature does not follow the existing API guidelines found here: see examples a and b.

2) The new feature does not use our existing input validation and security checking code, see example.

Once the following points have been addressed we will be happy to integrate it.

All the best.

The ball is now in their court, and the feature should come back better.

This is a politics problem. Engineers were sending each other crap long before AI.

blandflakes 7 hours ago | parent | next [-]

Engineers also wrote good code before AI. We don't get to pretend that the speed increase of AI only increases the output of quality code - it also allows engineers to send much more crap!

parliament32 16 hours ago | parent | prev [-]

..so they copy/paste your message into Claude and send you back a +2000, -1500 version 3 minutes later. And now you get to go hunting for issues again.

sjducb an hour ago | parent [-]

If that happens then there’s an issue.

In the past I’ve hopped on a call with them and asked them to show me it running. When it falls over, I say: here are the things the system should do; send me a video of the new system doing all of them.

The embarrassment usually shames them into actually checking that the code works.

If it doesn’t then you might have to go to the senior stakeholder and quietly demonstrate that they said it works, but it does not actually work.

You don’t want to get into a situation where “integrate” means write the feature while others get credit.

zaphar a day ago | parent | prev | next [-]

There is an alternative way to make the necessary point here: let it go through with comments to the effect that you cannot attest to the quality or efficacy of the code, and let the organization suffer the consequences of this foray into LLM usage. If they can't use these tools responsibly and are unwilling to listen to the people who can, then they deserve to hit the inevitable quality wall, where endless passes through the AI still can't deliver working software and their token budget goes through the ceiling attempting to make it work.

ytoawwhra92 a day ago | parent [-]

I think you're falling victim to the just-world fallacy.

zaphar a day ago | parent [-]

I am absolutely certain the world isn't just. I'm also absolutely certain the world can't become just unless you let people suffer the consequences of their decisions. It's the only way people learn.

ytoawwhra92 11 hours ago | parent [-]

IME that simply doesn't work in professional environments. People will either misrepresent the failure as a success or find someone else to pin the blame on. Others won't bother taking the time to understand what actually happened because they're too busy and often simply don't care. And if it's nominally your responsibility to keep something up, running, and stable then you're a very likely scapegoat if it fails. Which is probably why people are throwing stuff that doesn't work at you in the first place. Trying to solve the problem through politics is highly unlikely to work because if you were any good at politics you wouldn't have been in that situation in the first place.

diacritical a day ago | parent | prev | next [-]

> My org [...] is all-in on AI with the goal of having 100% of our code AI generated within the next year.

> People who disagree at all levels of seniority have been made to leave the organization.

So either they're right (100% AI-generated code soon) and you'll be out of a job or they'll be wrong, but by then the smart people will have been gone for a while. Do you see a third future where next year you'll still have a job and the company will still have a future?

gilbetron 20 hours ago | parent [-]

"100% AI-generated code soon" doesn't mean no humans, just that the code itself is generated by AI. Generating code is a relatively small part of software engineering. And if AI can do the whole job, then white collar work will largely be gone.

diacritical 12 hours ago | parent [-]

I agree, but it seems like if we can tell the AI "follow these requirements and use this architecture to make these features", we're a small step away from letting the AI choose the requirements, the architecture and the features. And even if it's not 100% autonomous, I don't see how companies will still need the same number of employees. If you're the lead $role, you'll likely stay, but what would be the use of anyone else?

LordGrey a day ago | parent | prev | next [-]

> ... I make performance improvements, I stabilize CI, I improve code readability, remove compiler warnings, you name it ...

These are exactly the kind of tasks that I ask an AI tool to perform.

Claude, Codex, et al. are terrible at innovation. What they are good at is regurgitating patterns they've seen before, which often means refactoring something into a more stable/common format. You can paste compiler warnings and errors into an agentic tool's input box and have it fix them for you, with a good chance of success.

I feel for your position within your org, but these tools are definitely shaking things up. Some tasks will be given over entirely to agentic tools.

tenacious_tuna 20 hours ago | parent [-]

> These are exactly the kind of tasks that I ask an AI tool to perform.

Very reasonable nowadays, but those were things I was doing back in 2018 as a junior engineer.

> Some tasks will be given over entirely to agentic tools.

Absolutely, and I've found tremendous value in using agents to clean up old tech debt with one-line prompts. They run off, make the changes, modify tests, then put up a PR. It's brilliant and has fully reshaped my approach... but in a lot of ways the expectations on my efficiency are much worse now, because leadership thinks I can rewrite our tech stack in another language over a weekend. It almost doesn't matter that I can pass all this tidying off onto an LLM, because I'm expected to have 3x the output that I had a year ago.

mentalgear a day ago | parent | prev | next [-]

Unfortunately, not many companies seem to require engineers to cycle between "feature" and "maintainability" work. Hence those who chase the low-hanging fruit and know how to virtue-signal build their careers on "features," while engineers passionate about correct solutions are left to pay for it and get labelled "inefficient" by management. It's all a clown show, especially now with vibe-coding; no wonder several big companies have had multiple incidents since vibing took off.

caminante a day ago | parent [-]

Culture and accountability problems aren't limited to software.

It's best to sniff out values mismatches ASAP and then decide whether you can tolerate some discomfort to achieve your personal goals.

whiplash451 a day ago | parent | prev [-]

Shipping “quality only” work for a long time can be stressful for your colleagues and the product teams.

You’re much better off mixing both (quality work and product features).

tenacious_tuna 20 hours ago | parent [-]

> Shipping “quality only” work for a long time can be stressful for your colleagues and the product teams.

I buried the lede a bit, but my frustration has been feeling like _nobody_ on my team prioritizes quality and instead optimizes for feature velocity, which then leaves some poor sod (me) to pick up the pieces to keep everything ticking over... but then I'm not shipping features.

At the end of the day if my value system is a mismatch from my employer's that's going to be a problem for me, it just baffles me that I keep ending up in what feels like an unsustainable situation that nobody else blinks at.

oh_my_goodness a day ago | parent | prev | next [-]

"aren't you part of the problem?"

Yes? In the same way any victim of shoddy practices is "part of the problem"?

ramraj07 a day ago | parent [-]

Employees, especially ones as well leveraged and overpaid as software engineers, are not victims. They can leave. They _should_ leave. Great engineers are still able to get better-paying jobs all the time.

Aurornis a day ago | parent | next [-]

> Great engineers are still able to bet better paying jobs all the time

I know a lot of people who tried playing this game frequently during COVID, then found themselves stuck in a bad place when the 0%-interest money ran out and companies weren't eager to hire someone whose resume had a dozen jobs in the past 6 years.

queenkjuul a day ago | parent | prev | next [-]

You obviously haven't gone job hunting in 2026

I hope you get the privilege soon

oh_my_goodness a day ago | parent | prev [-]

Employees are not victims. Sounds like a universal principle.

borski 2 days ago | parent | prev | next [-]

Came here to say this. The right solution to this is still the same as it always was: teach the juniors what good code looks like, and how to write it. Over time, they will learn to clean up the LLM's messes on their own, improving both of your jobs.

Aurornis a day ago | parent | prev [-]

> and refuse to clean their mess

You can and should speak up when tasks are poorly defined, underestimated, or miscommunicated.

Try to flat out “refuse” assigned work and you’ll be swept away in the next round of layoffs, replaced by someone who knows how to communicate and behave diplomatically.

arwhatever a day ago | parent [-]

ramraj07 went on to clarify that they were advocating for putting the onus for cleanup back on mess generators.

They clearly were not advocating for flat out refusing.