freedomben 6 hours ago

Agree. There's also a weird ideological thing in open source right now, where any AI must be AI slop, and no AI is the only solution. That has strongly disincentivized legitimate contributions from people. I have to imagine that's having an impact.

There's a very real problem of low effort AI slop, but throwing out the baby with the bathwater is not the solution.

That said, I do kind of wonder if the old model of open source just isn't very good in the AI era. Maybe it will be when AI gets a lot better, but for now it takes real human effort to review and test. If contributors were reviewing and testing like they should be, it wouldn't be an issue, but far too many people just run AI and don't even look at the output before sending the PR. It's not the maintainer's job to do all the review and testing of a low-effort push. That's not fair to them, and even setting that aside, it's a terrible model for software you share with anyone else.

skeledrew 5 hours ago

> where any AI must be AI slop, and no AI is the only solution

Yep, also a huge factor. Why publish something you built with an AI assistant if you know it's going to be immediately dunked on not because the quality may be questionable, but because someone sees an em-dash, or an AI coauthor, and immediately goes on a warpath? Heck I commented[0] on the attitude just a few hours ago. I find it really irritating.

[0] https://github.com/duriantaco/fyn/issues/4#issuecomment-4117...

johnnyanmac 30 minutes ago

>where any AI must be AI slop, and no AI is the only solution.

AI as of now is like ads. Ads as a concept are not evil, but what they've done to everyday life is evil enough that I wouldn't flinch at them being banned or highly regulated one day (well, not much. The economic fallout would be massive, but my QoL would go way up).

That's how I feel here. And looking at the PRs some popular repos have to deal with, we're well into the "shove a pop-up ad with a tiny close button you can't easily reach" stage of AI.

kubanczyk 6 hours ago

You know what else strongly disincentivized legitimate contributions from people?

Having your code snatched and its copyright disregarded, to the benefit of some rando LLM vendor. People can just press "pause" and wait until they see whether they fuel something that brings joy to the world. (Which it might in the end. Or not.)

freedomben 2 hours ago

For sure, that's legit too. I've had to grapple with that feeling personally. I didn't get to a great place, other than hoping that AI is democratized enough that it can benefit humanity. When I introspected deep enough, I realized I contributed to open source for two reasons, nearly equally:

1. To benefit myself with features/projects

2. To benefit others with my work

1 by itself would mean not bothering with PRs, upstream modifications, etc. It's way easier to hoard your changes than to go through the effort of getting them merged upstream. 2 by itself isn't enough motivation to spend the effort getting up to speed on the codebase, testing, etc. Together, though, they're a powerful motivation for me.

I have to remind myself that both things remain a net positive even with AI training on my stuff. It's certainly not all pros (there are plenty of cons with AI too), but on the whole I think we're headed for a good destination, assuming open models continue to progress. If it ends up winner-takes-all for Anthropic or OpenAI, that changes my calculus and will probably really piss me off. Luckily I've gotten positive value back from those companies, even considering having to pay for it.

JasperNoboxdev 5 hours ago

Been going back and forth on this with open source tools I've built. The training data argument is valid, but honestly the more immediate version of the same problem is that someone can just take your repo, feed it to an agent, and have their own fork in an afternoon.

The moat used to be effort: nobody wants to rewrite this from scratch (especially when it's free). What's left is actually understanding why the thing works the way it does. Not sure that's enough to sustain open source long-term? I guess we all have to get used to it.

freedomben 2 hours ago

> but honestly the more immediate version of the same problem is that someone can just take your repo, feed it to an agent, and have their own fork in an afternoon.

Indeed, I've got a few applications I've built or contributed to that are (A)GPL, and for those I do worry about this AI-washing technique. For libraries that are MIT or otherwise permissive, I don't really care. (I default to *GPL for applications, MIT/Apache/etc. for libraries.)