loeber 5 hours ago

This is a deeply pessimistic take, and I think it's totally incorrect. While I believe that the traditional open source model is going to change, it's probably going to get better than ever.

AI agents mean that dollars can be directly translated into open-source code contributions, and dollars are much less scarce than capable OSS programmer hours. I think we're going to see the world move toward a model in which open source projects receive large numbers of dollar contributions, which the maintainers then responsibly turn into AI-generated code contributions. I think this model is going to work really, really well.

For more detail, I have written my thoughts on my blog just the other day: https://essays.johnloeber.com/p/31-open-source-software-in-t...

matteotom 5 hours ago | parent | next [-]

Funding for open source projects has been a problem for about as long as open source projects have existed. I'm not sure I follow why you think specifying that donations will go toward LLM tokens will suddenly open the floodgates.

loeber 5 hours ago | parent [-]

If you don't get it, then you should read the blog post and come back if you have questions.

matteotom 5 hours ago | parent | next [-]

I did. Your argument seems to be that LLMs allow users who want specific features to direct a donation specifically towards the (token) costs of developing that feature. But I don't see how that's any different from just offering to pay someone to implement the feature you want. In fact, this does happen, eg in the case of companies hiring Linux devs; but it hasn't worked as a general purpose OSS-funding mechanism.

loeber 16 minutes ago | parent [-]

Because offering to pay people to implement features is very expensive and tends to take a long time, if they do it at all. Often, they can't even find people to pay to implement things.

In the case of companies hiring Linux devs, that is very, very costly and therefore inaccessible. Scale makes it different from the scenario of paying a few dollars to contribute tokens to fix a bug.

jscd 5 hours ago | parent | prev [-]

Wow, impressively insufferable

abrookewood 5 hours ago | parent | prev | next [-]

There are a few valid arguments that I see to support the pessimism:

1. When people use LLMs to code, they never read the docs (why would they), so they miss the fact that the open source library may have a paid version or extension. This means that open source maintainers will receive less revenue and may not be able to sustain their open source libraries as a result. This is essentially what the Tailwind devs mentioned.

2. Bug bounties have encouraged people to submit crap, which wastes maintainers' time and may lead them to close pull requests. If they do the latter, then they won't get any outside help (or at least, they will get less). Even if they don't do that, they now have a higher burden than before.

SoftTalker 3 hours ago | parent [-]

Bug bounties had this risk from day one. Any time you create a reward for something there will be people looking to game it for maximal personal benefit. LLMs and coding agents have just made it that much easier to churn out "vulnerability" reports and amplified it.

avaer 5 hours ago | parent | prev | next [-]

But locally, dollars are a zero-sum game. Your dollars came from someone else. If you make a project better for yourself without making it better for others, you can possibly one-up others and make more dollars with it. If you make it better for everyone, that's not necessarily the case. You're just diluting your money, and soon enough you'll have no money and you're eliminated from the race.

While I'd like to believe in the decency and generosity of humans, I don't get the economic case for donating money to the agent behind an OSS project when the person could spend the money on the tokens locally themselves and reap the exclusive reward. If it really is just about money, only the latter makes sense.

Obviously this is a gross oversimplification, but I don't think you can ignore the rational economics of this, since in capitalism your dollars are earned through competition.

xyzzy123 5 hours ago | parent [-]

Would be cool if you could donate to a maintainer's favourite bot to get bugs fixed.

Usually, getting stuff fixed on main is better than being forced to maintain a private fork.

voxl 5 hours ago | parent | prev | next [-]

Open source will ban AI. I'd bet $100 that AI contributions will increasingly be banned outright from large OSS projects.

mythrwy 4 hours ago | parent [-]

How will they know who wrote the code?

lovich 5 hours ago | parent | prev | next [-]

Why would people/companies donate more money to open source in the future than they already donate today?

It’s a tragedy of the commons problem. Most of the money available is not tied up to decision makers who are ideologically aligned with open source, so I don’t see why they’d donate any more in the future.

They usually do so because they are critically reliant on a library that's going to die, think it's good PR, want to make engineers happy (I don't think they care about that anymore), or think they can gain control of some aspect of the industry (looking at you, Futurewei and the corporate workers of the Rust project).

loeber 5 hours ago | parent [-]

Because donating to open source projects today has an extremely unclear payoff. For example, I donate to KDE, which is my favorite Linux desktop environment. However, this does not have a measurable impact on my day-to-day usage of KDE. It's very abstract in that I'm making a tiny, opaque contribution to its development, but I have no influence on what gets developed.

More concretely, there are many features that I'd love to see in KDE which don't currently exist. It would be amazing if I could just donate $10, $20, $50 and submit a ticket for a maintainer to consider implementing the feature. If they agree that it's a feature worth having, then my donation easily covers running AI for an hour to get it done. And then I'd be able to use that feature a few days later.

sarchertech 5 hours ago | parent [-]

1. You can already do that; it just costs more than $10.

2. Even assuming the AI can crap out the entire feature unassisted, in a large open source code base the maintainer is going to spend a sizeable fraction of the time they would have spent coding the feature on reviewing and testing it instead. You're now back to 1.

Conceivably it might make it a little cheaper, but not anywhere close to the kind of money you’re talking about.

Now if agents do get so good that no human review is required, you wouldn’t bother with the library in the first place.

saimiam 5 hours ago | parent | next [-]

> Now if agents do get so good that no human review is required, you wouldn’t bother with the library in the first place.

The comment you responded to is (presumably) talking about the transition phase where LLMs can help implement but not fully deliver a feature and need human oversight.

If there are reasonably good devs in low CoL areas who can coax a new feature or bug fix for an open source project out of an LLM for $50, I think it's worth trialling as a business model.

sarchertech 4 hours ago | parent [-]

Did you skip the first part of my comment where I specifically addressed that?

Even if the human is only doing review and QA, there's no low cost-of-living area where $50 gets you enough time from someone competent enough to do those things. Much less $10.

lovich 5 hours ago | parent | prev [-]

Yea, that’s the ideologically not aligned part I referenced.

If AI can make features without humans, why would I, as a profit-maximizing organization, donate that resource instead of keeping it in house? If we're not gonna have human eyes on it then we're not getting more secure, I don't really think positive PR would exist for that, and keeping it in house denies competitors a resource you now have and they don't.

invalidname 4 hours ago | parent | prev | next [-]

As a maintainer of a medium-size OSS project I agree. We've been running the project for over a decade, and a few years back Google came out with a competitor that pretty much sucked the air out of our field. It didn't matter that our product was better; we didn't have the resources to compete with a Google hobby project.

As a result our work on the project got reduced to maintenance until coding agents got better. Over the past year I've rewritten a spectacular amount of the code using AI agents. More importantly, I was able to construct enterprise level testing which was a herculean task I just couldn't take up on my own.

The way I see it, AI brought back my OSS project that was heading to purgatory.

EDIT: Also, about the OP's post: it's really f*ing bug bounties that are the problem. These things are horrible and should die in a fire...

kerkeslager 5 hours ago | parent | prev | next [-]

> AI agents mean that dollars can be directly translated into open-source code contributions, and dollars are much less scarce than capable OSS programmer hours.

I think this is true, but misses the point: quantity of code contributions is absolutely useless without quality. You're correct that OSS programmer hours are the most scarce asset OSS has, but AI absolutely makes this scarce resource even more scarce by wasting OSS programmers' time sifting through clanker slop.

There literally isn't an upside. The code produced by AI simply isn't good enough consistently enough.

That's setting aside the ethical issues of stealing other people's work and spewing even more carbon into the atmosphere.

Ygg2 5 hours ago | parent | prev | next [-]

Great.

Give money to maintainers? No.

Give money to bury maintainers in AI Slop? Yes.

Snakes3727 4 hours ago | parent | prev [-]

Hi, I just wanted to let you know your article reads like it was written by AI, as you fail to go into any real explanation for anything.

Frankly, I can summarize your entire essay as:

"We can give maintainers of OSS projects money to maintain projects" revolutionary never been done before. /S