> But the users would have to maintain their own forks then.

I suppose the idea would be that they don't have to maintain it: if it ever starts to rot from whatever environmental changes, they can just get the LLM to patch it, or at worst, generate it again from scratch. (And personally, I prefer writing code so that it isn't tightly coupled to the environment or to other people's fast-moving libraries to begin with, since I don't want to poke at all of my projects every other year just to keep them functional.)
The LLM can a priori test on all possible software and hardware environments, test all possible edge cases for deployment, get feedback from millions of eyes on the project explicitly or implicitly via bug reports and usage, find good general-case features given the massive amounts of data gathered from the community about where the project needs to go next, etc.? Even in a world with pure LLM coding, it's more likely that LLMs would maintain an open-source project as a place for other LLMs to contribute to. You're forgetting that code isn't just a technical problem (and even if it were, that would be a wild claim that goes against all hardness results known to humans, given the limits of a priori reasoning...).
LegionMammal978 2 days ago:

> The LLM can a priori test on all possible software and hardware environments, test all possible edge cases for deployment, get feedback from millions of eyes on the project explicitly or implicitly via bug reports and usage, find good general-case features given the massive amounts of data gathered from the community about where the project needs to go next, etc.?

Even if that's the ideal (and a very expensive one in terms of time and resources), I really don't think it accurately describes the maintainers of the very long tail of small open-source projects, especially those simple enough for the relevant features to be copied into a few files' worth of code. Like, sure, projects like Linux, LLVM, Git, or the popular databases may fit that description, but people aren't trying to vendor those via LLMs (or so I hope). And in any case, if the project presently fulfills a user's specific use case, then it "going somewhere next" may well be viewed as a persistent risk.
gf000 2 days ago:

Yeah, the funny thing is that Linux being open-source is absolutely in line with capitalism. Just look at the list of maintainers: they are almost all paid employees of gigacorps.

It is just an optimization that makes sense -- writing an OS that is compatible with all sorts of hardware is hard, let alone one that is also performant, checked for vulnerabilities, etc. Why would each gigacorp waste a bunch of money on developing its own, when it could just spend a tiny bit to improve a specific area it deeply cares about, and benefit from all the other changes financed by other companies?
m4rtink 2 days ago:

And the GPL makes it all work: no single gigacorp can just take the whole thing and legally run with it for its own gain, as it could if the kernel were, say, MIT- or BSD-licensed. So you have direct competitors all contributing to a common project in harmony.
gf000 2 days ago:

Well, the GPL is good, but I think this setup would still be a local optimum for gigacorps even if the license were MIT or similar. They already use plenty of MIT-licensed libraries, e.g. HarfBuzz. It simply wouldn't make sense for them to let other companies' improvements go out the window by forking privately, unless they could directly monetize the fork. So this doesn't apply to every project, but these foundational, low-level ones would be safe even without a protective license.
|
|