spicyusername 2 hours ago

A lot of people down on AI in this thread, but I'm watching the industry slip over the line of trust with these latest frontier models. GPT 5.5 is the first model good enough for me to just let rip.

Every jira ticket I see now has acceptance criteria, reproduction steps, and detailed information about why the ticket exists.

Every commit message now matches the repo style, and has detailed information about what's contained in the commit.

Every MR now has detailed information about what's being merged.

Every code base in the teams around me now has 70 to 90%+ code coverage.

Every line of code now comes with best practices baked in, helpful comments, and optimized hot paths.

I regularly ship four features at a time now across multiple projects.

The MCP has now automated away all of the drudgery of programming, from summarizing emails, to generating confluence documentation, to generating slide decks.

People keep screaming that tech debt is going to pile up, but I think it's going to be exactly the opposite. Software is going to pile up because developing it is now cheap.

Most code before LLMs sucked. Most projects I onboarded onto were a massive ball of undocumented spaghetti, written by humans. The floor has been raised significantly on what bad code can even look like, and fixing issues is now basically free if your company is willing to shell out for tokens.

neya 15 minutes ago | parent | next [-]

What you are describing is the role of a manager, not a software engineer. Software engineering has very little to do with writing code; it's more about architecting, at a higher level, what needs to be done. The code is just the executional part. LLMs can code? Okay, good. Without a clear architectural pathway / direction, that code is just useless. It's not tech debt. It's just a bunch of random strings. You can argue that Claude Code and others do create a plan of attack, but still, that's not at the architectural level, rather the executional level.

To me, architecture starts all the way from the top. Even before you write a single line of code, you do the DDD (Domain-Driven Design), then create a set of rulesets (e.g. use the domain name as the table prefix) and contexts, and then define the functionality with respect to that architecture. LLMs can do all this, but only if you ask them to explicitly. So they are pretty useful to brainstorm with, but they can't autonomously design reliably, push to production with your eyes closed, and support a 100,000-user base. It's a far cry from that.
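The "domain name as table prefix" ruleset above can be sketched as an explicit, checkable rule that an LLM (or a human) is held to. A minimal illustration — the domain names and the helper function are hypothetical, not from any real framework:

```python
# Minimal sketch of a DDD-style naming ruleset: every table must be
# prefixed with the name of the bounded context (domain) it belongs to.
# The domain names here are illustrative assumptions.
DOMAINS = {"billing", "inventory", "shipping"}

def table_name_ok(table: str) -> bool:
    """Return True if the table name starts with a known domain prefix."""
    prefix, _, rest = table.partition("_")
    return prefix in DOMAINS and bool(rest)
```

The point is that the ruleset lives outside the LLM: "billing_invoices" passes, a bare "invoices" does not, and the check is deterministic regardless of what generated the code.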

But sure, you can upsell management on vanity metrics like lines of code and get that promotion with an LLM. It's still not software engineering, though.

HarHarVeryFunny 17 minutes ago | parent | prev | next [-]

> Software is going to pile up because developing it is now cheap.

Software to do what, though ?!

Coding, maybe 10% of a developer's job (Brooks's "No Silver Bullet" estimates 1/6), was never the bottleneck, and even if you automated it away entirely you'd only reduce development time by about 10% (assuming you're not doing human code review, etc.).
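The arithmetic behind that claim is just an Amdahl's-law-style bound; a tiny sketch:

```python
# Illustrative Amdahl's-law bound: if coding is a fraction f of total
# development effort and automation makes that part free, the overall
# speedup is capped at 1 / (1 - f), no matter how fast the tool is.
def max_speedup(f: float) -> float:
    return 1.0 / (1.0 - f)
```

With Brooks's 1/6 estimate for coding, `max_speedup(1/6)` comes out to 1.2x overall, which is why fully automating the coding step alone moves the needle so little.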

I would also argue that software development as a whole (not just the coding part) was also typically never the bottleneck to companies shipping product faster, maybe also not for automating their business faster (internal IT systems), since the rest of the company is not moving that fast, business needs are not changing that fast, and external factors that might drive change are not moving that fast either.

I think that when the dust settles we'll find that LLM-assisted coding has had far less impact than those trying to sell it to us are forecasting. There will be exceptions of course, especially in terms of what a lone developer can do, or how fast a software startup can get going, but in terms of impact to larger established companies I expect not so much.

alrtkh an hour ago | parent | prev | next [-]

For people who like to tick boxes, which is essentially most of the above, AI is welcome. That includes managers.

It still has nothing to do with software engineering. All good code was written by humans. AI took it, plagiarizes it, launders it and repackages it in a bloated form.

Whenever I look deeply at an AI plagiarized mess, it looks like it is 90% there but in reality it is only 50%. Fixing the mess takes longer than writing it oneself.

zozbot234 an hour ago | parent | next [-]

The hard part of software engineering is turning a vague problem description into a set of box-ticking exercises. If ticking boxes became genuinely easier, the software engineering part is now a lot more valuable.

readitalready 42 minutes ago | parent | prev [-]

No, fixing the mess definitely does not take longer than writing it oneself.

Your linter should identify all issues - including architectural and stylistic choices - and the AI agents will immediately repair them.

It's about 1000x faster than a human coder at repairing its own mess.

applfanboysbgon 23 minutes ago | parent [-]

> Your linter should identify all issues - including architectural

If a linter could deterministically identify bad architecture, you wouldn't need an LLM, your linters could just write your code for you. The vibe coding takes are just getting more and more empty-headed...

hansmayer 19 minutes ago | parent [-]

> If a linter could deterministically identify bad architecture, you wouldn't need an LLM,

a) That's not what a linter is built for; it's a tool with a very specific role.

b) You must've never seen an LLM expose secrets in plain text, or reach for the most convoluted approach you can think of.

duskdozer 8 minutes ago | parent | prev | next [-]

>I regularly ship four features at a time now across multiple projects.

Well, this explains why so much software nowadays is so slow, buggy, and chaotic.

agency 6 minutes ago | parent [-]

Unlike 3 years ago, when nobody complained about software being slow, buggy and chaotic

onlyrealcuzzo an hour ago | parent | prev | next [-]

> I regularly ship four features at a time now across multiple projects.

Many people are missing the fact that LLMs allow ICs to start operating like managers.

You can manage 4 streams now. Within a couple years, you may be able to manage 10 streams like a typical manager does today.

IME, LLMs don't speed you up that much if 1) you're already an expert at what you're doing (inherently not scalable), 2) you're only working on one thing (doesn't make sense when you can manage multiple streams), or 3) you're doing something LLMs are particularly bad at (not many remaining coding tasks, but definitely still some).

zozbot234 an hour ago | parent [-]

A manager doesn't have to look at the code that's being shipped. An IC will still need to do that, and this will eventually take up much of their work. It can be addressed by moving up the stack to higher level and more strictly checked languages, where there's overall less stuff to review manually.

onlyrealcuzzo 8 minutes ago | parent | next [-]

People typically think it's not a new person's fault if they come into a team and bring down production.

That's a failure of the existing infrastructure to allow someone to do this.

LLM coding will work like this.

If you're letting LLMs go wild with no system in place that automatically verifies they're moving in the right direction and "shipping" things up to your standards, the failure is yours, not the LLM's.

jnwatson 18 minutes ago | parent | prev | next [-]

Just like a manager, you don't need to look at the code. You need to set up quality systems to provide evidence the code does what it is supposed to do, just like a manager.
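One way to picture such a "quality system" is as an explicit gate over the evidence a change produces. A minimal sketch — the `Evidence` fields and thresholds are hypothetical assumptions, not any real CI product's API:

```python
# Hypothetical quality gate: accept a change only when the evidence
# (tests, coverage, lint) clears explicit thresholds. The field names
# and the 80% coverage bar are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Evidence:
    tests_passed: bool
    coverage: float      # fraction of lines covered, 0.0 to 1.0
    lint_errors: int

def gate(e: Evidence, min_coverage: float = 0.8) -> bool:
    """Ship only when every piece of evidence clears its bar."""
    return e.tests_passed and e.coverage >= min_coverage and e.lint_errors == 0
```

The manager analogy holds because the gate judges outcomes (does the evidence clear the bar?) rather than inspecting how the code was written.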

hansmayer 21 minutes ago | parent | prev [-]

Spot on. When will the cretins understand that it's not about how much code you can generate?

nyxtom an hour ago | parent | prev | next [-]

I agree with most of this, I just have sort of turned a blind eye to what the code actually probably looks like. Reviews are rapid, and I’ll admit I do feel like I’m betraying my inner programmer by just optimizing directly against the claims of token bot. But the way I see it, as long as the numbers don’t lie I’m okay with the process.

p2detar an hour ago | parent | prev | next [-]

> I regularly ship four features at a time now across multiple projects.

Can that happen without you? I would assume this is the next step. I don't find it either good or bad, but I'm genuinely curious where this all goes.

kiba 42 minutes ago | parent | prev | next [-]

Everyone talks about productivity as if that is the only metric that matters in the business.

> The MCP has now automated away all of the drudgery of programming, from summarizing emails, to generating confluence documentation, to generating slide decks.

I wonder about the hallucination. Reading someone's writing doesn't take all that long.

xantronix 22 minutes ago | parent [-]

> the drudgery of programming

Is programming supposed to suck all the time? Am I doing it wrong? I mean yeah, sure, it sucks sometimes, but overcoming that "suck" is where I feel progress and growth. If we decide to optimise that away...What the fuck am I doing here? No offence to managers, but if everybody is a manager, is anybody?

Forgeties79 7 minutes ago | parent [-]

Feels kind of like the problem of everybody wanting to be an entrepreneur in the 2010s. Just led to people basically trying to get paid to be middleman companies skimming from others that don’t really need them, or worse, selling supplements and life coaching or whatever on social media and other grifts.

mhitza an hour ago | parent | prev | next [-]

> GPT 5.5 is the first model good enough for me to just let rip.

You know this is the exact same thing that was said about Opus 4.6, right?

That makes it hard to believe, because it's the same "last week's model was so far behind you can't even comprehend" meme that's been repeating all through last year.

More info dumped into tickets and projects is great for the understanding of both people and LLMs. But hopefully it's not LLM-generated.

happytoexplain an hour ago | parent | prev | next [-]

I think numerically this is the exception - and it's a fantastic exception! But in practice what I've seen is things getting worse because people still just aren't very good at thinking, so the great-looking Jira ticket actually turns out to be nonsensical in some subtle way, whereas before it was just lacking in some obvious way that could immediately be called out and had an obvious solution.

I.e. it's making good output better, but it's making mediocre output (which is most output) worse by adding volume and the appearance of quality, creating a new layer of FUD, stress, tedium, and unhappiness on top of the previously more-manageable problems that come with mediocre output.

I'm still seeing this even with the newest models, because the problem is the user, not the model - the model just empowers them to be even worse, in a new and different way.

oblio an hour ago | parent | prev [-]

> Software is going to pile up because developing it is now cheap.

https://somehowmanage.com/2020/10/17/code-is-a-liability-not...

j16sdiz 18 minutes ago | parent [-]

Kind of like credit card.

Every American learns how to live with debt :)