I don't want your PRs anymore(dpc.pw)
195 points by speckx 5 hours ago | 119 comments
boricj 4 hours ago | parent | next [-]

I've been on both ends of this.

As the maintainer of ghidra-delinker-extension, whenever I get a non-trivial PR (like adding an object file format or ISA analyzer) I'm happy that it happens. It also means that I get to install a toolchain, maybe learn how to use it (MSVC...), figure out all of the nonsense and undocumented bullshit in it (COFF...), write a byte-perfect roundtrip parser/serializer plus tests inside binary-file-toolkit if necessary, prepare golden Ghidra databases, write the unit tests for them, make sure that the delinked stuff actually works when relinked, have it pass my quality standards plus the linter, and have a clean Git history.

I usually find it easier to take their branch, do all of that work myself (attributing authorship to commits whenever appropriate), push it to the master branch and close the PR than puppeteering someone halfway across the globe through GitHub comments into doing all of that for me.
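For what it's worth, the "attributing authorship" part is mechanical in git: `--author` records the contributor as author while the maintainer remains the committer. A minimal sketch in a throwaway repository (names and emails are illustrative):

```shell
# Create a throwaway repo to demonstrate author vs. committer attribution.
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo
cd demo

# The maintainer commits the reworked change, crediting the contributor.
git -c user.name="Maintainer" -c user.email="m@example.com" \
    commit -q --allow-empty \
    --author="Contributor <c@example.com>" \
    -m "Add feature, reworked from contributor branch"

# Author (who gets credit) and committer (who landed it) now differ.
git log -1 --format='author: %an | committer: %cn'
```

`git cherry-pick` and `git rebase` preserve authorship the same way, so picking commits off a contributor's branch keeps their name on the work by default.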

Conversely, at work I implemented support for PKCS#7 certificate chains inside of Mbed-TLS and diligently submitted PRs upstream. They were correct, commented, documented, tested; everything was spotless, by the implicit admission of one of the developers. It's still open today (with merge conflicts, naturally) and there are like five open PRs for the exact same feature.

When I see this, I'm not going to insist, I'll move on to my next Jira task.

a1o 3 hours ago | parent | next [-]

> push it to the master branch and close the PR than puppeteering someone halfway across the globe through GitHub comments into doing all of that for me

While I understand the sentiment I am glad I got into open source more than fifteen years ago, because it was maintainers “puppeteering” me that taught me a lot of the different processes involved in each project that would be hard to learn by myself later.

toast0 3 hours ago | parent | next [-]

There's a balance though. Some people want to end up with a perfect PR that gets applied, some just want the change upstream.

Most of my PRs are things where I'm not really interested in learning the ins and outs of the project I'm submitting to; I've just run into an issue that I needed fixed, so I fixed it[1], but it's better if it's fixed upstream. I'll submit my fix as a PR, and if it needs formatting/styling/whatevs, it'll probably be less hassle if upstream tweaks it to fit. I'm not looking to be a regular contributor, I just want to fix things. Personally, I don't even care about having my name on the fix.

Now, if I start pushing a lot of PRs, maybe it's worth the effort to get me onto the page stylewise. But, I don't usually end up pushing a lot of PRs on any one project.

[1] Usually, I can have my problem fixed much sooner if I fix it, than if I open a bug report and hope the maintainer will fix it.

chongli 3 hours ago | parent | prev [-]

This is the Eternal September problem in disguise. That is, the personal interactions we treasure so much in small communities simply do not scale when the communities grow. If a community (or a project) grows too large then the maintainers / moderators can no longer spend this amount of time to help a beginner get up to speed.

matheusmoreira an hour ago | parent | prev [-]

> I usually find it easier to take their branch, do all of that work myself (attributing authorship to commits whenever appropriate), push it to the master branch and close the PR than puppeteering someone halfway across the globe through GitHub comments into doing all of that for me.

My most negative experiences with free and open source contributions have been like that. The one maintainer who engaged with me until my patches got into master was the best experience I ever had contributing to open source software to this day.

Pretty sad that people see engaging with other developers as "puppeteering someone halfway across the globe through GitHub comments"...

gogopromptless 40 minutes ago | parent | prev | next [-]

This is the best articulation of this viewpoint I have seen so far so I tip my hat to your writing skills.

I've been sitting with this post for a bit and something about the framing keeps bugging me.

I keep coming back to one question: are you planning to add Co-authored-by trailers to these commits?

The pitch is: don't send me code, send me the prompt you used to produce it. And on the surface that sounds like a lighter ask — a prompt is a small thing, one sentence maybe, versus a whole diff. But that's the sleight of hand. The prompt that produced a working patch is almost never one prompt. My PR is the compressed output of all of that work.

So "just send me the prompt" is doing two things at once. It's reframing the contributor's work as a small artifact (one prompt, how hard could that be), and it's reassigning the result of that work to whoever re-runs it. If I send you the actual thing that produced the patch — the whole transcript, the rejected attempts, the corrections — that's not a smaller contribution than a PR, it's a more complete one. And you're going to land the commit under your name.

This is the same shape of argument the model labs made about training data. Individual contribution is "too insignificant" to be worth crediting, but the aggregate is valuable and belongs to whoever assembled it. I don't love it there and I don't love it here.

The other thing I keep thinking about: to anyone who hasn't used these tools seriously, an engineer landing a high volume of clean commits under their own name looks like a very productive author. That gap between how it looks and what's actually happening is going to close eventually, but in the meantime it's a real incentive to set things up this way.

None of this is an argument against the workflow. Review is expensive, untrusted code is risky, I get it. I just want to make an argument for Co-authored-by being part of the deal.
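For reference, a `Co-authored-by:` trailer is just a line in the commit message body, which GitHub parses to credit the co-author on the commit. A sketch with hypothetical names:

```shell
# Throwaway repo to demonstrate the Co-authored-by trailer.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .

# The trailer goes in its own paragraph of the message body;
# GitHub associates it with the named account by email.
git -c user.name="Maintainer" -c user.email="m@example.com" \
    commit -q --allow-empty \
    -m "Implement feature from contributed prompt" \
    -m "Co-authored-by: Prompt Author <prompt.author@example.com>" \

# Show the full commit message, including the trailer.
git log -1 --format=%B
```

Multiple trailers can be stacked, one per co-author, so crediting the prompt author costs the maintainer one line per commit.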

eschneider 4 hours ago | parent | prev | next [-]

This seems...fine?

I know when I run into bugs in a project I depend on, I'll usually run it down and fix it myself, because I need it fixed. Writing up the bug along with the PR and sending it back to the maintainer feels like common courtesy. And if it gets merged in, I don't need to fork/apply patches when I update. Win-win, I'd say.

But if maintainers don't want to take PR's, that's cool, too. I can appreciate that it's sometimes easier to just do it yourself.

hunterpayne 3 hours ago | parent | prev | next [-]

Somehow, this seems like a serious negative consequence of LLMs to me. We should consider how security patches move through the ecosystem. Changes like this are understandable but only because PRs from LLMs are so bad and prolific. When a new exploit is discovered, the number of sites that require a change goes up exponentially due to LLMs not using libraries. At the same time, the library contributors will likely not know to change their code in view of the new exploit. This doesn't seem like healing, more like being dissolved and atomized to the point of uselessness.

marcus_holmes 3 minutes ago | parent | next [-]

> When a new exploit is discovered, the number of sites that require a change goes up exponentially due to LLMs not using libraries

Conversely, if there's a supply chain attack in a library it's not being immediately spread to thousands of production servers.

comboy 3 hours ago | parent | prev [-]

Code changes are cheaper to make now and kind of more expensive to verify.

So you can still contribute, you just don't need to provide the code, just the issue.

Which isn't as bad as it sounds. It kind of feels bad to rewrite somebody's code right away when it is theoretically correct, but opinionated codebases seem to work very well if the maintainer's opinions are sane.

hunterpayne 2 hours ago | parent [-]

And if the maintainer doesn't understand something about how the exploit works? Also, code changes aren't cheaper, it's just that you can watch YouTube instead of putting in effort now. But time still passes and that costs the same. Reviewing the code is far more expensive now though, since the LLM won't use libraries.

PS The economics of software haven't really changed, it's just that people (executives) wish they had. They misunderstood the economics of software before LLMs and they misunderstand the economics of software now.

PPS The only people that LLMs benefit are the segment of devs who are lazy.

bawolff 4 hours ago | parent | prev | next [-]

I think every maintainer should be able to say how they want or don't want others to contribute.

But I feel like it was always true that patches from the internet at large were largely more trouble than they were worth most of the time. The reason people accept them is not for the sake of the patch itself but because that is how you get new contributors who eventually become useful.

jcims 3 hours ago | parent | next [-]

> But I feel like it was always true that patches from the internet at large were largely more trouble than they were worth most of the time.

Oh god, I needed to add a feature to an open source project (kind of a freemium project) about fifteen years ago. I had no experience with professional software development nor did I have any understanding of pull requests. I sent one over after explaining what I was trying to do and that I thought it would be a good feature for the project.

Now they probably shouldn’t have just blindly merged it, but they did, and it really made a mess.

Learned a valuable lesson that day haha.

em-bee 2 hours ago | parent [-]

curious, what happened, and what did you learn?

if they merge something blindly, then it's really on them if it makes a mess.

akdev1l 3 hours ago | parent | prev [-]

I think the author of the article is missing this point.

When you actually work alongside people and everyone builds a similar mental model of the codebase then communication between humans is far more effective than LLMs.

For random contributions then this doesn’t apply

loeber 35 minutes ago | parent | prev | next [-]

I wrote an article a few months ago about how I expect open-source development to shift toward people no longer contributing code, but just contributing money, for the maintainers to purchase tokens to have AI ship a given feature request.

It wasn't popular but I think it will hold: https://essays.johnloeber.com/p/31-open-source-software-in-t...

sfjailbird 4 hours ago | parent | prev | next [-]

Given that submitters are just using LLMs to produce the PR anyway, it makes sense that the author can just run that prompt himself. Just share the 'prompt' (whether or not it is actually formatted as a prompt for an LLM), which is not too different than a feature request by any other name.

alxhslm 3 hours ago | parent | next [-]

Agree with this. Maybe we should start making PRs with the proposed spec and then the maintainer can get their agent to implement it.

This is similar to what we’ve started to do at work. The first stage of reviewing a PR is getting agreement on the spec. Writing and reviewing the code is almost the trivial part.

dpc_01234 2 hours ago | parent | prev | next [-]

Yes. At this point a prompt that produces the desired result is more useful than the resulting code in a PR. Effectively the code starts to have the properties of a resulting binary.

vlapec 4 hours ago | parent | prev | next [-]

+1

esafak 2 hours ago | parent | prev [-]

If your use of LLMs is limited to one shot prompts you must be new.

pkulak 4 hours ago | parent | prev | next [-]

Forking and coming back is what I like to do. At this very moment I've got a forked project that I'm actively using and making tiny changes to as things come up in my workflow. In another week or two, when I'm certain that everything is working great and exactly how I want it, I'll file an issue and ask if they are interested in taking in my changes; mostly as a favor to me so I don't have to maintain a fork forever.

socratic_weeb an hour ago | parent | prev | next [-]

This is sad. Human endeavors always benefit from the distributed contribution of the many, which keeps everyone's egos and preferences in check while approaching the optimal global solution in the limit. Every social institution works like this, and FOSS is no different. Centralization almost never works in social endeavors. Not to mention the dubious ethical implications of using (potentially stolen) LLM code in FOSS projects.

This might be controversial, but we need more Stallmans; that is, people who care about the ethical implications of tech, and not just the tech.

dataflow 35 minutes ago | parent | prev | next [-]

Off-topic question about this:

> It's increasingly apparent that "the source code" is less "source" and more "code"

(How) does this affect the GPL and what it defines as "source code" (the preferred means of modification to a program)?

pncnmnp 4 hours ago | parent | prev | next [-]

I have come to a similar realization recently - it's what I call "Take it home OSS" - i.e. fork freely, modify it to your liking using AI coding agents, and stop waiting for upstream permission. We seem to be gravitating towards a future where there is not much need to submit PRs or issues, except for critical bugs or security fixes. It's as if OSS is raw material, and your fork is your product.

boricj 3 hours ago | parent | next [-]

Recently I've been air-dropped into such a legacy project at work in order to save a cybersecurity-focused release date. Millions of lines of open-source code checked in a decade ago prior to the Subversion-to-Git migration, then patched everywhere to the point where diffs for the CVEs don't apply and we're not even sure what upstream versions best describe the forks.

By the end, the project manager begged me to turn off my flamethrower, as I was ripping it all out for a clean west manifest to tagged versions and stacks of patches. "Take it home OSS" is like take-out food: if you don't do your chores and leave it out for months or years on the kitchen counter, the next person to enter the apartment is going to puke.

tonyarkles 2 hours ago | parent [-]

> west manifest

Zephyr-based project?

boricj 2 hours ago | parent [-]

No, it predates it by a couple of decades.

But our modern embedded firmware projects all use Zephyr and west, so I just created a west manifest, stole parts of the scripts/ folder from the Zephyr repository to have a working "west patch" command and went to town. If I had more time to work on it, I'd have gotten "west build", "west flash" and "west debug" working too (probably with bespoke implementations) and removed the cargo cult shell scripts.

You can use west without Zephyr, it's just that by itself it only provides "west init" and "west update".
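For readers unfamiliar with west manifests: a standalone `west.yml` just pins named repositories to revisions, which is what makes "tagged versions and stacks of patches" auditable. A minimal sketch (the remote, project name, revision, and path here are illustrative, not from the actual project):

```yaml
manifest:
  remotes:
    - name: upstream
      url-base: https://github.com/example-org
  projects:
    - name: legacy-lib
      remote: upstream
      revision: v1.2.3          # pin to a tagged upstream release
      path: modules/legacy-lib  # where west checks the project out
```

With a manifest like this, `west update` checks every project out at its pinned revision, and local patches can be layered on top instead of being smeared through a decade-old vendored tree.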

tonyarkles an hour ago | parent [-]

Awesome! Yeah, I haven't used west-sans-Zephyr yet but interestingly enough NXP decided to use it for their MCUXpresso framework: https://github.com/nxp-mcuxpresso/mcuxsdk-manifests/blob/mai...

I have mixed feelings about west in general, but I like it enough that I'd probably look at doing something like that in the future too for harmony-sake with our existing Zephyr projects.

rebolek 3 hours ago | parent | prev | next [-]

This is very shortsighted, and it's like polishing a gun to shoot your foot with.

If it's "take it home OSS" and "there is not much need to submit PRs or issues", then why would anybody submit PRs and issues "for critical bugs or security fixes"? If they have a fix and it works for them, they're fine, after all.

And while we’re at it, why would anybody share anything? It’s just too much hassle. People will either complain or don’t bother at all.

I think that after a few years, when LLM coding is an old boring thing everybody's used to and people have learned a few hard lessons from not sharing, we'll come to some new form of software collaboration, because "me and an LLM" is not actually better than "me and an LLM and thousands or millions of people and their LLMs".

fc417fc802 3 hours ago | parent [-]

> then why would anybody submit PRs and issues "for critical bugs or security fixes"?

Why do they do that at present? There are plenty of cases where it's a hassle but people still do it, presumably out of a sense of common decency.

kajaktum an hour ago | parent | next [-]

I think people do that because they are closely involved with the code/plumbing. If they spend a week fixing a bug, they feel the weight of the changes that they made with every line that they wrote. If they just fixed the issue in passing and moved on to the next thing, I don't know if they would feel the same weight to contribute back.

More often than not, LLMs fix an issue as a downstream user, so there's even less pressure to fix the issue upstream. Because if library A does not work on Windows, it would just mash together library B and C and something of its own to work around it.

tonyarkles 2 hours ago | parent | prev [-]

> common decency

Another self-serving reason is so that you can upgrade in the future without having to worry about continually pulling your own private patch set forward.

dTal 4 hours ago | parent | prev | next [-]

Won't be much "raw material" left before long, if everyone takes that view.

fc417fc802 3 hours ago | parent [-]

Sure there will, as long as people continue to publish their work including the various forks. The community dynamic will change as will the workflows surrounding dependencies but the core principle will remain.

Vulnerability detection might prove to be an issue though. If we suddenly have a proliferation of large quantities of duplicate code in disparate projects, detection and coordinated disclosure become much more difficult.

f33d5173 3 hours ago | parent | prev [-]

This has always been the case, and is really the main practical advantage of open source. Contributing code back is an act of community service which people do not always have time for. The main issue is that over time, other people will contribute back their own acts of community service, which may be bug fixes or features that you want to take advantage of. As upstream and your own fork diverge, it will take more and more work to update your patches. So if you intend to follow upstream, it benefits you to send your patches back.

ncruces 3 hours ago | parent | prev | next [-]

I agree with this.

Over the past month or so I implemented a project from scratch that would've taken me many months without an LLM.

I iterated at my own pace, I know how things are built, it's a foundation I can build on.

I've had a lot more trouble reviewing similarly sized PRs (some implementing the same feature) on other projects I maintain. I made a huge effort to properly review and accept a smaller one because the contributor went the extra mile, and made every possible effort to make things easier on us. I rejected outright - and noisily - all the low effort PRs. I voted to accept one that I couldn't in good faith say I thoroughly reviewed, because it's from another maintainer that I trust will be around to pick up the pieces if anything breaks.

So, yeah. If I don't know and trust you already, please don't send me your LLM generated PR. I'd much rather start with a spec, a bug, a failing test that we agree should fail, and (if needed) generate the code myself.

gavmor 4 hours ago | parent | prev | next [-]

> there's this common back-and-forth round-trip between the contributor and maintainer, which is just delaying things.

Delaying what?

pclowes 4 hours ago | parent [-]

Merging code

mogoh 4 hours ago | parent [-]

There would be no merge if there isn't a PR in the first place.

pclowes 4 hours ago | parent [-]

This isn’t about the PR. This is about the back-and-forth.

If the maintainer authors every PR they don’t have to waste time talking with other people about their PR.

clutter55561 4 hours ago | parent | prev | next [-]

If they are willing to feed a bug report to their LLM, then perhaps they can also feed a bug report + PR to their LLM and not make a big fuss out of it.

Also, at the point they actively don’t want collaboration, why do open source at all?

Strange times, these.

dpc_01234 2 hours ago | parent | next [-]

If you read the post till the end, I am open to lots of forms of collaboration. It's just that sending chunks of code diffs around is becoming increasingly like sending diffs of the resulting binaries. Just inefficient.

singpolyma3 4 hours ago | parent | prev [-]

Open source is about sharing and forking, not collaboration.

Collaboration is a common pattern in larger projects but is uncommon in general

clutter55561 4 hours ago | parent [-]

A PR, at best, is collaboration, not a transaction.

mvvl 4 hours ago | parent | prev | next [-]

This is only going to get worse with LLMs. Now people can "contribute" garbage code at 10x the speed. We're entering the era of the "read only" maintainer focused on self-defense.

Gigachad 3 hours ago | parent | next [-]

I’ve seen it already where someone has set some fully automated agent at the GitHub issues and machine gunned PRs every minute for every reported issue. Likely never looked at or tested by the submitter.

Even if they worked, it would be easier for the maintainer to just do that themselves rather than review and communicate with someone else to resolve issues.

vlapec 3 hours ago | parent | prev [-]

...that assumes LLMs will contribute garbage code in the first place. Will they, though?

mvvl 3 hours ago | parent | next [-]

The problem isn't that it can't write good code. It's that the guy prompting it often doesn't know enough to tell the difference. Way too many vibe coders these days who can generate a PR in 5 seconds, but can’t explain a single line of it.

tonyarkles 3 hours ago | parent [-]

That’s 100% the trick to it all. I don’t always write code using LLMs. Sometimes I do. The thing that LLMs have unlocked for me is the motivation to put together really solid design documentation for features before implementing them. I’ve been doing this long enough that I’ve usually got a pretty good idea of how I want it to work and where the gotchas are, and pre-LLMs would “vibe code” in the sense that I would write code based on my own gut feeling of how it should be structured and work. Sometimes with some sketches on paper first.

Now… especially for critical functionality/shared plumbing, I'm going to be writing a Markdown spec for it and I'm going to be getting Claude or Codex to review it with me before I pass it around to the team. I'm going to miss details that the LLM is going to catch. The LLM is going to miss details that I'm going to catch. Together, after a few iterations, we end up with a rock solid plan, complete with incremental implementation phases that either I or an LLM can execute on in bite-sized chunks and review.

biker142541 3 hours ago | parent | prev [-]

The LLM isn’t contributing garbage, the user is by (likely) not testing/verifying it meets all requirements. I haven’t yet used an LLM which didn’t require some handholding to get to a good code contribution on projects with any complexity.

samuelknight 4 hours ago | parent | prev | next [-]

> On top of that, there are a lot of personal and subjective aspects to code. You might have certain preferences about formatting, style, structure, dependencies, and approach, and I have mine.

Code formatting is easy to solve. You write linting tests, and if they fail the PR is rejected. Code structure is a bit trickier. You can enforce things like cyclomatic complexity, but module layout is harder.
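As a toy illustration of such a gate (the "formatter" here just strips trailing whitespace; a real project would run its own linter in CI and fail the check on any diff):

```shell
# Sample files: one already formatted, one that would be rewritten.
tmp=$(mktemp -d)
printf 'clean line\n' > "$tmp/good.c"
printf 'trailing space \n' > "$tmp/bad.c"

# A file passes the gate iff the formatter would leave it unchanged.
check_format() {
    sed 's/[[:space:]]*$//' "$1" | cmp -s - "$1"
}

check_format "$tmp/good.c" && echo "good.c: ok"
check_format "$tmp/bad.c" || echo "bad.c: needs formatting, reject PR"
```

The same compare-against-formatter-output pattern is how most real lint gates work (e.g. running the formatter in a check-only mode and failing on a nonzero exit).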

ChrisMarshallNY 3 hours ago | parent | prev | next [-]

> Users telling me what works well and what could be improved can be very helpful.

That's a unicorn.

If I'm lucky, I get a "It doesn't work." After several back-and-forths, I might get "It isn't displaying the image."

I am still in the middle of one of these, right now. Since the user is in Australia, and we're using email, it is a slow process. There's something weird with his phone. That doesn't mean that I can't/won't fix it (I want to, even if it isn't my fault. I can usually do a workaround). It's just really, really difficult to get that kind of information from a nontechnical user, who is a bit "scattered," anyway.

arjie 3 hours ago | parent | prev | next [-]

Realistically, there are a much larger set of things that I don't mind forking these days. It is quite a bit of effort to get to a set of things that mostly don't have bugs, but in the past I might fork a few things[0] but these days, I vendor everything, and some things I just look at the feature list and have Claude rebuild it for me. So I totally understand why actual project maintainers and authors wouldn't want input. I, as a user, am not super eager to buy in to the actual maintainers' future work. It would be super surprising that they want to buy into strangers' work.

0: I liked BurntSushi's Rust projects since they are super easy to edit because they're well architected and fast by default. Easy to write in.

dpc_01234 2 hours ago | parent [-]

I'm currently running on a fork of the Helix text editor, which I heavily gutted to replace the block cursor with a beam-style one (like the cursor in insert mode, but all the time). Since the maintainers are drowning in PRs (472 open ATM), I understandably don't expect them to have time for my weird ideas. Then I pile on top whatever PRs I find useful out of those 472, and with a little bit of LLM help I have a very different text editor than the upstream.

arjie an hour ago | parent [-]

That's exactly the same viewpoint I have.

How do you like Helix as a starting point? Currently, I'm having Claude write a little personal text editor with CodeEditTextView as a starting point, and now that I've seen your comment I suddenly realized I mostly like using a modal editor, and only didn't do it here because I'm moving from a webpage (where Vimium-style stuff never appealed to me). Good hint, that. I wonder if neovim's server mode will be helpful to me.

dpc_01234 an hour ago | parent [-]

From a UX perspective, modal editing is all I care about and I just got used to Helix. And it's very feature complete and relatively stable, so hacking on it and maintaining a bunch of patches is not a lot of churn in practice.

porphyra 4 hours ago | parent | prev | next [-]

Maybe instead of submitting PRs, people should submit "prompt diffs" so that the maintainer can paste the prompt into their preferred coding agent, which is no doubt aware of their preferred styles and skills, and generate the desired commit themselves.

rebolek 3 hours ago | parent | next [-]

"Prompt diff" is just a wish. Why not call it what it is? It's perfectly fine to submit wishes, feature requests, RFCs or - if you want - "prompt diffs". But there's no need for an LLM to implement it; a human can do it. Or an LLM can. The point isn't who implements it; it's a wish.

acedTrex 4 hours ago | parent | prev [-]

Why would anyone bother doing this, prompts are not code, they are not shareable artifacts that give the same results.

travisjungroth 4 hours ago | parent | next [-]

Neither are bug reports or feature requests.

orangesilk 3 hours ago | parent | next [-]

Bug reports should be reproducible. They may even be statistically reproducible. A bug report that cannot be reproduced is worthless.

esafak 2 hours ago | parent [-]

It is not worthless; that means you need to work on making it easier to detect and report bugs.

mjmas 3 hours ago | parent | prev [-]

Do you accept bug reports that just say "it doesn't work" or do you require reproducibility?

fc417fc802 3 hours ago | parent | prev [-]

> Why would anyone bother doing this

For the same reason a PR can be useful even if it turns out to be imperfect. Because it reduces the workload for the maintainer to implement a given feature.

Obviously that means that if it looks likely to be a net negative the maintainer isn't going to want it.

freetime2 4 hours ago | parent | prev | next [-]

Couldn't you also just have an LLM review the PR and quickly fix any issues? Or even have it convert the PR into a list of specs, and then reimplement from there as you see fit?

I guess my point being that it's become pretty easy to convert back and forth between code and specs these days, so it's all kind of the same to me. The PR at least has the benefit of offering one possible concrete implementation that can be evaluated for pros and cons and may also expose unforeseen gotchas.

Of course it is the maintainer's right to decide how they want to receive and respond to community feedback, though.

warmwaffles 3 hours ago | parent | next [-]

> Couldn't you also just have an LLM review the PR and quickly fix any issues? Or even have it convert the PR into a list of specs, and then reimplement from there as you see fit?

Sometimes I'm not a fan of the change in its entirety and want to do something different but along the same lines. It would be faster for me to point the agent at the PR and tell it "Implement these changes but with these alterations..." and iterate with it myself. I find the back and forth in pull requests to be overly tiresome.

idiotsecant 4 hours ago | parent | prev [-]

Why not just have the LLM write the PR in the first place? Because LLMs are imperfect tools. At least for the foreseeable future the human in the loop is still important

freetime2 3 hours ago | parent [-]

I am assuming that, for the vast majority of code changes moving forward, the PR will be written by an LLM in the first place.

petetnt 3 hours ago | parent | prev | next [-]

Luckily we have had the perfect paradigm for this kind of mindset for decades: proprietary software. The spirit of open source is already essentially dead due to it being co-opted by companies and individuals working only for their own gain, and for it to rise again we probably need a total reset.

jerkstate 4 hours ago | parent | prev | next [-]

That's fine, the cost for me to re-implement your code is nearly zero now. I don't have to cajole you into fixing problems anymore.

OkayPhysicist 4 hours ago | parent | next [-]

This is obviously in an open source environment. You never needed to cajole them into fixing problems, you could just fix it yourself. That was always an option. That's literally the entire point of open source.

charcircuit 4 hours ago | parent [-]

People doing work that you can take for free to make money off of is another big point of open source you can't ignore.

torvoborvo 4 hours ago | parent | prev | next [-]

It seems like quite a tower of Babel just waiting to happen. All those libraries that once had thought put into the tangled consequences of supporting similar new features, and once had ways to identify which security updates they needed, will all just be defective clones with 5%-95% compatibility for security exploits, and support for integrations that are mostly right but a little hallucinated?

Lerc 3 hours ago | parent [-]

I think it's more likely that libraries will give way to specified interfaces. Good libraries that provide clean interfaces with a small surface area will be much less affected by this compared to frameworks that like to be a part of everything you do.

The JavaScript ecosystem is a good demonstration of a platform that is encumbered with layers that can only ever perform the abilities provided by the underlying platform, while adding additional interfaces that, while easier for some to use, frequently provide a lot of functionality a program might not need.

Adding features as a superset of a specification allows compatibility between users of a base specification, failure to interoperate would require violating the base spec, and then they are just making a different thing.

Bugs are still bugs, whether a human or AI made them, or fixed them. Let's just address those as we find them.

tshaddox 4 hours ago | parent | prev | next [-]

The cost of forking open source code was always effectively zero.

marcta 4 hours ago | parent [-]

It's not really, because you now have the cost of maintaining that fork, even if it's just for yourself.

bawolff 4 hours ago | parent | next [-]

Which is still true in our brave new llm world.

Lerc 3 hours ago | parent [-]

That may be part of the issue. Perhaps LLMs are just causing people to reveal how much they consider a maintainer as providing a service for them. Maintainers don't work for you, they let you benefit from the service they perform.

That workload of maintaining a fork doesn't come from nowhere, it's just a workload someone else would have to do before the fork occurred.

tshaddox 3 hours ago | parent | prev [-]

I'm talking about the literal process of forking an open source project. You're just making a copy of a set of files.

pydry 4 hours ago | parent | prev | next [-]

Given the supposed quality of top flight models there ought to be a lot more people forking open source projects, implementing missing features and releasing "xyz software that can do a and b".

Somehow it's not really happening.

jaggederest 4 hours ago | parent | next [-]

I've actually been doing this for my own purposes - an adhoc buggy half-implemented low latency version of Project Wyoming from home assistant.

Repo, for those interested: https://github.com/jaggederest/pronghorn/

I find that the core issues really revolve around the audience - getting it good enough that I can use it for my own purposes, where I know the bugs and issues and understand how to use it, on the specific hardware, is fabulous. Getting it from there to "anyone with relatively low technical knowledge beyond the ability to set up home assistant", and "compatible with all the various RPi/smallboard computers" is a pretty enormous amount of work. So I suspect we'll see a lot of "homemade" software that is definitely not salable, but is definitely valuable and useful for the individual.

I hope, over the long to medium term, that these sorts of things will converge in an "rising tide lifts all boats" way so that the ecosystem is healthier and more vibrant, but I worry that what we may see is a resurgence of shovelware.

philipkglass 4 hours ago | parent | prev | next [-]

I have already forked open source software to fix issues or enhance it via coding agents. I put it on github publicly, so other people can use it if they see it, but I don't announce it anywhere. I don't want to deal with user complaints any more than the current maintainers do. (I'm also not going to post my github profile here since it has my legal name and is trivially linked to my home address.)

LostMyLogin 4 hours ago | parent | prev [-]

Because it still requires the desire to do it.

_verandaguy 4 hours ago | parent | prev [-]

This is an unethical take, and long-term and at scale, an unsustainable/impractical one. This kind of mindset results in tool fragmentation, erosion of trust, and ultimately worse quality in software.

GaryBluto 4 hours ago | parent [-]

So you're saying people forking open source software is "unethical"? What is open source then? Just a polite offer that it is rude to accept?

As a sidenote: what's with the usage of "take" to designate an opinion instead of the word "opinion" or "view"?

krick 4 hours ago | parent | prev | next [-]

It's good that he is upfront about it, but this surely shouldn't be taken as general advice, since everybody has their own preferences. So this really shouldn't be a blog post, but rather a "Contributing Guidelines" section in whatever projects he maintains.

caymanjim 3 hours ago | parent [-]

I firmly believe the author's stance should be the default policy of just about every open source project. I don't even write my own code anymore, I sure as hell don't want to deal with your code.

Give me ideas. Report bugs. Request features. I never wanted your code in the first place.

krick 3 hours ago | parent [-]

You believe. So it applies to projects you maintain. It doesn't mean it applies to project I maintain, or anybody else maintains. So this shouldn't be any more default than any other mode. And probably less default, since people generally developed other conceptions about "defaults" of etiquette in open source projects over the last 15 years.

caymanjim 2 hours ago | parent [-]

I'm not exactly disagreeing with you. I appreciate the author's stance, and I appreciate the blog post making it to HN. I think the author's stance should be widely adopted. If they had simply stuck their blog post into CONTRIBUTING.md in their repo, no one would have seen it. Now it's being more widely disseminated, and in a longer form with good reasoning attached.

cadamsdotcom 3 hours ago | parent | prev | next [-]

Yes, reviewing might take 1 hour but taking the PR and using it to guide an implementation also takes 1 hour.

Thank your contributor; then use the PR, and the time you'd have spent reviewing it, to guide a reimplementation.

shell0x 2 hours ago | parent [-]

My coworkers just let Claude review the PR now instead of reading the code. It seems the entire contract is broken now.

Submitters use LLMs to generate the code and reviewers use LLMs to review it.

c0wb0yc0d3r 2 hours ago | parent [-]

> Submitters use LLMs to generate the code and reviewers use LLMs to review it.

This is just like my favorite, “We can use LLMs to write the code and write the tests.”

Mathnerd314 4 hours ago | parent | prev | next [-]

The author sounds like he actually responds to feature requests, though. Typical behavior I'm seeing is that the maintainer just never checks the issue tracker, or has it disabled, but is more likely to read PR's.

perrygeo an hour ago | parent [-]

As it should be. Issues are a dime a dozen, sometimes coherent, rarely relevant. There is no barrier to submitting a garbage issue.

PRs, at the very least, provide a quality gate. You have to a) express the idea in runnable code and b) pass the tests.

mactavish88 4 hours ago | parent | prev | next [-]

Great example of how to set boundaries. The open source community is slowly healing.

bawolff 4 hours ago | parent [-]

I don't think this is an example of setting boundaries. Usually a boundary would be stopping people from making you do work you don't want to do.

This is just a change in position of what work is useful for others to do.

woeirua 4 hours ago | parent | prev | next [-]

I agree with this mindset. Instead of submitting code diffs, we should be submitting issues or even better tests that prove that bugs exist or define how the functionality should work.

vicchenai 3 hours ago | parent | prev | next [-]

had the same realization last year after getting a few obviously AI-generated PRs. reviewing them took longer than just writing it myself. maybe the right unit of contribution is going back to being the detailed bug report / spec, not the patch

tonymet 3 hours ago | parent | prev | next [-]

PRs come from your most engaged community members. By banning PRs you won’t get more contributions, you will discourage your (current and future) most active members.

globalnode an hour ago | parent | prev | next [-]

They're happy to get ideas from people to improve their product but not happy to help novices improve their coding skills and collaboration workflow. LLM's really are great and horrible at the same time.

acedTrex 4 hours ago | parent | prev | next [-]

If I do the work for a feature I'm usually already using it via a fork; I offer the patch back out of courtesy. Up to you if you want it, I'm already using it.

lou1306 4 hours ago | parent | prev | next [-]

> On top of that, there are a lot of personal and subjective aspects to code. You might have certain preferences about formatting, style, structure, dependencies, and approach, and I have mine.

95% of this is covered by a warning that says "I won't merge any PR that a) does not pass linting (configured to my liking) and b) introduces extra deps"

> With LLMs, it's easier for me to get my own LLM to make the change and then review it myself.

So this person is passing on free labour and instead prefers a BDFL scheme, possibly supported by a code assistant they likely have to pay for. All for a supposed risk of malice?

I don't know. I never worked on a large (and/or widely adopted) open-source codebase. But I am afraid we would've never had Linux under this mindset.

xantronix 4 hours ago | parent | next [-]

I'm with the author here; I don't really feel like dealing with people's PRs on my personal projects. The fact that GitHub only implemented a feature to disable PRs in February is absolutely baffling to me, but I'm glad it's there. Just because a project's source code is made available to the public under a permissive license does not mean the maintainer is under any obligation to merge other people's changes.

It feels like a lot of people assume a sense of entitlement because one platform vendor settled on a specific usage pattern early on.

OkayPhysicist 4 hours ago | parent | prev | next [-]

> 95% of this is covered by a warning that says "I won't merge any PR that a) does not pass linting (configured to my liking) and b) introduces extra deps"

Maybe I'm not up to date with the bleeding edge of linters, but I've never seen one that adequately flags

    let out = []
    for(let x of arr){
      if(x > 3){
    out.push(x + 5)
      }
    }
Into

   let out = arr
             .filter(x => x > 3)
             .map(x => x + 3)
There's all sorts of architectural decisions at even higher levels than that.

well_ackshually 4 hours ago | parent [-]

Indeed, yours has both more allocations and a bug (+3 instead of +5)

sodapopcan 3 hours ago | parent [-]

More allocations is a good point but you're being pedantic about the bug... how do you know the +5 isn't the bug? :P
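
For what it's worth, the allocation concern and the functional style aren't mutually exclusive. As a sketch (keeping the `+5` from the original loop version), `flatMap` fuses the filter and the map into a single pass, avoiding the full-length intermediate array that chained `.filter().map()` materializes:

```javascript
// Single-pass filter + map via flatMap: keep x + 5 when x > 3,
// drop everything else. No full-length intermediate array is built,
// only a tiny throwaway array per kept element.
const arr = [1, 2, 3, 4, 5];

const out = arr.flatMap(x => (x > 3 ? [x + 5] : []));

console.log(out); // [ 9, 10 ]
```

Whether that reads better than the plain loop is exactly the kind of stylistic call no linter can make for you.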

stavros 4 hours ago | parent | prev [-]

No. When code is cheaper to generate than to review, it's cheaper to take a (well-written) bug report and generate the code yourself than to try to figure out exactly what the PR does and whether it has any subtle logical or architectural errors.

I find myself doing the same, nowadays I want bug reports and feature requests, not PRs. If the feature fits in with my product vision, I implement and release it quickly. The code itself has little value, in this case.

teach 4 hours ago | parent [-]

I definitely trust my local LLM where I know the prompt that was used. Even if the code generated ends up being near-identical, it'll be way faster to review a PR from someone or something I trust than from some rando on the Internet

igorzuk 2 hours ago | parent | prev | next [-]

Bugs aside, code generated by an LLM is NOT more trustworthy than a drive-by PR, you should review them just as closely. The slop machine doesn't care, it will repeat whatever pattern it found on the Internet no matter who originally wrote it and with what intent. There have been attacks poisoning LLMs with malicious snippets and there will be many more.

vatsachak 3 hours ago | parent | prev | next [-]

LLM psychosis

cmrdporcupine 3 hours ago | parent | prev | next [-]

Yes I feel very much like what I really want from people is very detailed bug reports or well thought through feature requests and even well specified test scenarios [not in code, but in English or even something more specified]

I know my code base(s) well. I also have agentic tools, and so do you. While people using their tokens is maybe nice from a $$ POV, it's really not necessary. Because I'll just have to review the whole thing (myself as well as by agent).

Weird world we live in now.

guelo 4 hours ago | parent | prev | next [-]

It's interesting that this is the opposite of Steve Yegge's conclusion in his Vibe Maintainer article where he says he's merging 50(!) contributor PRs a day.

https://steve-yegge.medium.com/vibe-maintainer-a2273a841040

dpc_01234 an hour ago | parent | next [-]

My prediction is that there will be a Cambrian explosion-like event in software practices for 5-10 years (assuming continued broad affordability and availability), because LLMs afford us a lot of freedom to try a lot of new things, including writing new tooling and changing the process. There's a lot to explore, and I guess we'll collectively see what sticks.

warmwaffles 3 hours ago | parent | prev [-]

I am curious to watch this unfold. How long until a clever supply chain attack affects this? What will the response be? It will be interesting to see.

grebc 3 hours ago | parent | prev | next [-]

Why bother having a public repository?

clutter55561 3 hours ago | parent [-]

My thoughts exactly.

yieldcrv 4 hours ago | parent | prev [-]

I like how fast this is changing

The fact-of-life journaling about the flood of code, the observation that he can just re-prompt his own LLM to implement the same feature or optimization

all of this would have just been controversial pontificating 3 months ago, let alone in the last year or even two years. But all of a sudden enough people are using agentic coding tools - many having skipped the autocomplete AI coders of yesteryear entirely - that we can have this conversation

dpc_01234 an hour ago | parent [-]

Author here. Here is my PoV on that evolution.

1 year ago Claude Code was relatively new, and the first polished tool that really fit my CLI-centric dev views. I used Aider before, but Claude Code was just much better. The autocomplete AI coders did not seem useful, and didn't have good integrations with the Helix text editor. However, even the frontier models were relatively bad in practice. Useful, but not trustworthy at all. Being wrong/stupid 5-10% of the time compounds quickly.

6 months ago agents became really robust at just writing the code they were told to write. Around that time I started really leaning into LLM-assisted coding, which requires some skill, experience, and adapting one's own workflows and tooling. And it takes time and effort.

Right now frontier models are really productive and robust. Sure, it's still fancy autocomplete under the hood, so one needs to plan around that, but it's now more common for Slopus to find bugs in my old human-written code than for me to find bugs in its new code, especially since one can now easily write and maintain tons of tests which otherwise would never get done. LLMs don't have context and good judgment, so it still takes a lot of designing and steering to get the agent to write the right thing, but that's OK. And as the productivity bottleneck has shifted very heavily from writing code to everything else around it, it becomes very apparent that it's not the clanker that now needs to get better, but the process around it.