jacquesm an hour ago

My rule of thumb is much shorter: don't.

The open source world has already been ripped off by AI; the last thing it needs is for AI to pollute the pedigree of its codebases.

sillysaurusx an hour ago | parent [-]

Suppose almost all work in the future is done via LLMs, just like almost all transportation is done today via cars instead of horses.

Do you think your worldview is still a reasonable one under those conditions?

lkjdsklf 35 minutes ago | parent | next [-]

But all work isn't done by LLMs at the moment, and we can't be sure that it ever will be, so the question is ridiculous.

Maybe one day it will be. People can reevaluate their stance then. Until that time, it's entirely reasonable to hold the position that you just don't.

This is especially true with how LLM generated code may affect licensing and other things. There's a lot of unknowns there and it's entirely reasonable to not want to risk your projects license over some contributions.

I use them all the time at work because, rightly or wrongly, my company has decided that's the direction they want to go.

For open source, I'm not going to make that choice for them. If they explicitly allow for LLM generated code, then I'll use it, but if not I'm not going to assume that the project maintainers are willing to deal with the potential issues it creates.

For my own open source projects, I'm not interested in using LLM generated code. I mostly work on open source projects that I enjoy or in a specific area that I want to learn more about. The fact that it's functional software is great, but is only one of many goals of the project. AI generated code runs counter to all the other goals I have.

logicprog 36 minutes ago | parent | prev | next [-]

I say let people hold this stance. We agentic coders can easily fork their project, add whatever features or refinements we want, and use that fork ourselves, while also making it available in case others want the extra features and polish as well. With AI, it's very easy to form a good architectural understanding of a large code base and figure out how to modify it in a sane, solid way that matches the existing patterns. It's also very easy to resolve conflicts when you rebase your changes on top of whatever is new from upstream. So maintaining a fork is really not that serious an endeavor anymore.

I'm actually maintaining a fork of Zed with several additional features: Claude Code style skills and slash commands; a global agents.md file in place of the annoying rules library system, which I removed; and the ability to choose models for sub-agents instead of always inheriting the model from the parent thread (and yes, master branch Zed has sub-agents!). I also maintain another tool, jjdag.
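The fork-and-rebase loop described above can be sketched with plain git. This is a minimal, self-contained illustration using throwaway repos in a temp directory; the repo layout, branch name `main`, and commit messages are all invented for the demo, not taken from any real project:

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d)

# "Upstream": the project being forked.
git init -q -b main "$tmp/upstream"
cd "$tmp/upstream"
echo base > base.txt
git add base.txt
git -c user.email=up@example.com -c user.name=up commit -qm "base"

# The fork: clone upstream and add a local feature commit.
git clone -q "$tmp/upstream" "$tmp/fork"
cd "$tmp/fork"
echo feature > feature.txt
git add feature.txt
git -c user.email=me@example.com -c user.name=me commit -qm "my feature"

# Upstream moves on...
cd "$tmp/upstream"
echo more > more.txt
git add more.txt
git -c user.email=up@example.com -c user.name=up commit -qm "upstream work"

# ...and the fork replays its local patches on top of the new upstream.
cd "$tmp/fork"
git fetch -q origin
git rebase -q origin/main
git log --format=%s   # newest first: my feature, upstream work, base
```

In a real fork you'd typically add the original project as an `upstream` remote alongside your own `origin`, but the mechanics are the same: fetch, rebase, resolve any conflicts, and your patches ride on top of whatever upstream ships next.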

That seems like a win-win in a sense: let the agentic coders do their thing, and the artisanal coders do their thing, and we'll see who wins in the long run.

zozbot234 11 minutes ago | parent | prev | next [-]

LLMs are more like golf carts than cars. The horses will still be around for the foreseeable future.

bandrami 22 minutes ago | parent | prev [-]

That would only be the case in a world where the copyright and other IP uncertainties around the output (and training!) of LLMs were a settled question. That's not the world we currently live in.