tossandthrow 2 days ago

You are very strong on the "slop" bias. Why?

In managing a large, enterprise-sized code base, I experience the opposite: I can guarantee a much more homogeneous quality across the code base.

What I am seeing is the opposite of slop. And at a lower cost.

Today, I literally made a large and complex migration of all of our endpoints. Took AI 30 minutes, including all frontends using these endpoints. Works flawlessly, debt principal down.

chaps 2 days ago | parent | next [-]

Which company do you work at so we can avoid your migrated endpoints?

bsmith 2 days ago | parent | next [-]

All big tech companies are mandating that employees use AI for tasks. Unless there's a similar movement to open source that is AI-free, you're going to need to be tech-free if you want to avoid companies that use AI.

tossandthrow 2 days ago | parent | prev [-]

Wtf. You don't even know what the migration was about?

chaps 2 days ago | parent [-]

I mean, I'm always down for learning something new. But I hope what I learn includes the name of the company I'd like to avoid.

tossandthrow 2 days ago | parent [-]

Your tone is in conflict with the statement that you are curious.

chaps 2 days ago | parent [-]

It's because you're deflecting. :)

tossandthrow 2 days ago | parent [-]

Deflecting from what? Telling the company name so you can avoid it due to your incredibly curious nature?

chaps 2 days ago | parent [-]

Sigh.

Look friend, I really hope you can realize how you sound in your post. You're extraordinarily confidently saying that you refactored some ambiguous endpoints in 30 minutes. Whenever I see someone act that confidently about refactoring, thousands of alarms go off in my head. I hope you see how it sounds to others. Like, at least spend longer than a lunch break on it, with just a tad more diligence. Or hell, maybe even consider LYING about how much time you spent on it. But my point is that your shortcuts will burn you. If you want to go down that path, I'm happy to be a witness to the eventual schadenfreude.

My issue isn't with the fact that you used AI. My issue is with how confident you are that it worked well and exactly to spec. I'm very well aware of what these systems can do. Hell, I've been able to get postgres to boot inside linux inside postgres inside linux inside postgres recently with these tools. But I'm also acutely aware of the aggressive ways these systems can break.

So again, which company should we all avoid so that we can avoid your, specifically your, refactoring?

tossandthrow 2 days ago | parent | next [-]

I definitely did not say anything about ambiguous endpoints.

The migration was relatively straightforward and could likely have been implemented as automatic code transforms.

What I did say was that it was complex.
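
A transform of that kind can indeed be done mechanically. A minimal sketch, with everything invented for illustration (the v1-to-v2 prefix, the route names; a real codemod would operate on the AST with a tool like jscodeshift or ts-morph rather than a regex):

```typescript
// Hypothetical codemod: rewrite legacy "/api/v1/" route registrations to a
// new "/api/v2/" prefix. Purely illustrative -- a real migration would use
// an AST transform, but the mechanical nature is the same.
function migrateEndpoint(source: string): string {
  // Only touch string literals passed directly to route registrars.
  return source.replace(
    /(\.(?:get|post|put|delete)\(\s*['"])\/api\/v1\//g,
    "$1/api/v2/"
  );
}

// Example: one route definition before and after the transform.
const before = `app.get('/api/v1/users', listUsers);`;
console.log(migrateEndpoint(before)); // app.get('/api/v2/users', listUsers);
```

Run over every file in the repo, a transform like this is exactly the "30 minutes for hundreds of endpoints" shape of work being described.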

chaps 2 days ago | parent [-]

Yikes. Have a good one.

ath3nd 2 days ago | parent | prev [-]

[dead]

apsurd 2 days ago | parent | prev | next [-]

One point: yes, you're speaking from the power position. God-mode over a fleet of minions has always been an engineer's wet-dream. That's not even bad per se. It's the collateral damage downstream that's at issue. Maybe you don't see any damage, but that's largely the point. Is it really up to you to say?

tossandthrow 2 days ago | parent [-]

What is the collateral damage? In ensuring that a bunch of endpoints use the same structure using LLMs?

apsurd 2 days ago | parent [-]

Let's not debate that it's possible to make very large very safe changes. It is possible that you did that.

This is about "slop bias". I'd wager that empowering everyone, especially power-positions to ship 50x more code will produce more code that is slop than not. You strongly oppose this because it's possible for you to update an API?

I'm stuck on the power-position thing because I'm living it. I'm pro-AI, but there are AI-transformation waves coming in, mandated top-down. From their green-field position it's undeniably crush-mode, killin' it. Maintenance of all kinds is a separate cost, and the leaders and implementors don't pay it. Maybe AI will address everything at every level. But those imposing this world assume that to be true, while it's the line engineers and sales and customer service reps that will bear the reality.

tossandthrow 2 days ago | parent [-]

> Maybe AI will address everything at every level.

I think this is the idea you need to entertain / ponder more on.

I largely agree with you; what I don't agree with is the weighting of the individual elements.

My point was that I could do a 30-minute cleanup in order to streamline hundreds of endpoints. Without AI I would not have been able to justify this migration for business reasons.

We get to move faster, also because we can shorten deprecation tails and generally keep code bases fit more easily.

In particular, we have dropped the external backoffice tool, so we have a single mono repo.

AI does tasks all the way from the infrastructure (setting policies on resources) to the frontends.

Equally, if a resource is not referenced in our codebase, we know with 100% certainty that it is not in use and can be cleaned up.

Unused-code audits run on a weekly schedule, like our security audits, robustness audits, etc.
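
The "not referenced means not in use" audit can be mechanized in a few lines. A sketch under the stated mono-repo assumption, with all resource names and file contents invented:

```typescript
// Sketch of the unused-resource audit described above: in a mono repo,
// any declared resource identifier that never appears anywhere in the
// source tree can be flagged for cleanup. All names here are invented.
function findUnusedResources(
  declared: string[],     // identifiers from infrastructure definitions
  sourceFiles: string[]   // contents of every file in the repo
): string[] {
  const corpus = sourceFiles.join("\n");
  return declared.filter((name) => !corpus.includes(name));
}

const declared = ["orders-queue", "legacy-report-bucket"];
const files = [`const q = queueClient.connect("orders-queue");`];
console.log(findUnusedResources(declared, files)); // [ 'legacy-report-bucket' ]
```

A real audit would resolve references more carefully (dynamic names, config indirection), which is why the substring check only gives a safe "definitely unused" signal, not a "definitely used" one.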

apsurd 2 days ago | parent [-]

Yeah, the more I debate the AI-lovers, the more I can empathize with the possibility that it may very well turn out that everything is an agent. Encodable.

I'm not a doomer either, but I do think this arc is a human arc: there's going to be a lot of collateral damage. To your point, Agents with good stewardship can also implement hygiene and security practices.

It's important we surface potential counter metrics and unintended side effects. And even in doing so the unknown unknowns will get us. With that said, I like this positive stewardship framing, I'll choose to see and contribute to that, thanks!

tossandthrow 2 days ago | parent [-]

I definitely don't identify as an AI lover. For me, year 0 of AI was February 6th 2026 and the release of Opus 4.6.

Until that day we had roughly zero AI code in the code base (additions or subtractions). So in all reasonable terms I am a late adopter.

For code bases, AI does not concern me. We have for quite some time worked with systems too complex for a single person to comprehend, so this is a natural extension of abstraction.

On the other hand, I am super concerned about AI and society: the impact on human well-being of "easy" AI relations over difficult human connection, and the continued human alienation and relational violation (I think the "woke" discourse will go on steroids).

I think society is going to be much less tolerant. And that frightens me.

skeeter2020 2 days ago | parent | prev | next [-]

>> Works flawlessly, debt principal down.

I don't doubt it completed the initial coding work in a short time, but the fact that you've equated that with flawless execution is on the concerning-to-scary spectrum. I can only assume you're talking "compiles, runs, ship it".

The danger is not generating obvious slop, it's accepting decent and convincing outputs as complete and absolving ourselves of responsibility.

tossandthrow 2 days ago | parent [-]

You are right, and it does happen that the output merely looks decent.

Code idioms, or patterns if you will, are largely our solution.

We have small pattern/[pattern].md files throughout the code base where we explain how certain things should be done.

In this case, the migration was a normalization to the specific pattern specified in the pattern file for the endpoints.

The semantics were not changed and the transform was straightforward. Just not a task I would be able to justify spending time on from a business perspective.

Now, the more patterns you have, and the more your code base adheres to these patterns, the easier you can verify the code (as you recognize the patterns) and the easier you can call out faulty code.

It is easier to hear an abnormality in music than in atmospheric noise. It is the same with code.
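
A pattern file of this kind might pin every endpoint to one declared shape. A hypothetical sketch (the factory name and fields are invented, not the poster's actual convention):

```typescript
// Hypothetical endpoint convention such as a pattern/endpoints.md might
// prescribe: every handler is declared through one factory, so method,
// path, and handler always appear in the same shape. Deviations from the
// pattern then stand out in review, which is the comment's point.
type Handler = (params: Record<string, string>) => unknown;

interface Endpoint {
  method: "GET" | "POST";
  path: string;
  handle: Handler;
}

function defineEndpoint(
  method: Endpoint["method"],
  path: string,
  handle: Handler
): Endpoint {
  return { method, path, handle };
}

// Two endpoints normalized to the convention.
const getUser = defineEndpoint("GET", "/users/:id", (p) => ({ id: p.id }));
const listUsers = defineEndpoint("GET", "/users", () => []);

console.log(getUser.method, getUser.path); // GET /users/:id
```

The migration described in the thread would then amount to rewriting every hand-rolled handler into this one declared shape, i.e. normalization rather than behavior change.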

peterbell_nyc 2 days ago | parent | prev | next [-]

Seeing plenty of this. The quality of agentic code is a function of the quantity and quality of adversarial quality gates. I have seen no proof that an agentic system is incapable of delivering code that is as functional, performant and maintainable as code from a great team of developers, and enough anecdotes in the other direction to suggest that AI "slop" is going to be a problem that teams with great harnesses will be solving fairly soon if they haven't already.

apsurd 2 days ago | parent [-]

I take your point, but then it makes me think: is there no more value in diversity?

[Philosophy disclaimer] So in a code-base diversity is probably a bad idea, ok that makes sense. But in an agentic world, if everything is run through the Perfect Harness then humans are intentionally just triggers? Not even that, like what are humans even needed for? Everything can be orchestrated. I'm not against this world, this is an ideal outcome for many and it's not my place to say whether it's inevitable.

What I'm conflicted on is does it even "work" in terms of outcomes. Like have we lost the plot? Why have any humans at all. 1 person billion dollar company incoming. Software aside, is the premise even valid? 1 person's inputs multiplied by N thousand agents -> ??? -> profit

SturgeonsLaw 2 days ago | parent | next [-]

> Why have any humans at all

Why have humans do work at all? We could have a radically better existence. It would mean that the few at the top of the pyramid lose their privileged position relative to the rest of us, but we could, actually, have that world of abundance for all.

Work in the current sense arguably isn't even desirable.

Maybe I've just read too many Culture books.

tossandthrow 2 days ago | parent | prev [-]

These are the right questions to ask.

hliyan 2 days ago | parent | prev | next [-]

> Today, I literally made a large and complex migration of all of our endpoints. Took AI 30 minutes, including all frontends using these endpoints. Works flawlessly, debt principal down.

This is either a very remarkable or a very frightening statement. You're claiming flawless execution within the same day as the change.

If you're unable to tell us which product this is, can you at least commit to report back in a month as to how well this actually went?

tossandthrow 2 days ago | parent [-]

It is a part of the smoke testing process right now.

But we run 90% test coverage, e2e tests, etc., none of which were altered, and all are passing.

Migrations are generally not that high risk if you have a code base in alright shape.

bluecheese452 2 days ago | parent | prev [-]

Ironically the post saying it is not slop sounds exactly like ai slop.

tossandthrow 2 days ago | parent [-]

Too many spelling errors for that to be slop...