| ▲ | How Vibe Coding Is Killing Open Source(hackaday.com) |
| 67 points by msolujic 2 hours ago | 44 comments |
| |
|
| ▲ | observationist an hour ago | parent | next [-] |
| Things change. The barrier to entry decreased, meaning more things will get created, more people will participate in communal efforts, and quality will depend on AI capabilities and figuring out how to curate well - better tools, less friction between idea and reality, and things get better for everyone. Just because some things suck, for now, doesn't mean open source is being killed. It means software development is changing. It'll be harder to distinguish between a good faith, quality effort that meets all the expectations of quality control without sifting through more contributions. Anonymous participation will decrease, communities will have to create a minimal hierarchy of curation, and the web of trust built up in these communities will have to become more pragmatic. The relationships and the tools already exist, it's just the shape of the culture that results in good FOSS that will have to update and adapt to the technology. |
| |
▲ | ozim 10 minutes ago | parent | next [-] | | I think you have it backwards. The barrier to entry just went up: why would I use a library when I can ask an LLM to make one for me? It shifts in a way where the „left-pad” kind of thing will not happen, because no one will need that kind of „library” when an LLM can generate it. I see it as a positive thing: no single schmuck will be terrorizing the whole ecosystem when there are dozens of different LLMs that can write such code. More people will keep their work to themselves, because either they will be able to create something commercial, or their „thing” won’t matter because an LLM will be able to replicate their effort in 5 minutes, so no one will be willing to pay for it. | |
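[For context on how little is at stake with the „left-pad” example: the whole utility amounts to a few lines. A rough TypeScript equivalent (a sketch, not the actual npm source) shows why an LLM can trivially regenerate it:]

```typescript
// Pad a string on the left with a fill character until it reaches `len`.
// Roughly what the infamous left-pad package did.
function leftPad(str: string, len: number, ch: string = " "): string {
  let padded = str;
  while (padded.length < len) {
    padded = ch + padded;
  }
  return padded;
}
```

[Modern JavaScript's built-in `String.prototype.padStart` had already made the package redundant even before LLMs.]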
| ▲ | LaurensBER an hour ago | parent | prev | next [-] | | I concur, open-source will be more reputation based and no doubt, in the future, LLMs can also act as a quality gate. I work a lot with quants (who can program but are more focused on making money than on clean-code) and Opus 4.5 and Kimi 2.5 are extremely good at giving them architecture guidance. They tend to overcomplicate some things but the result is usually miles better than what they produced without LLMs. | | |
▲ | blibble 40 minutes ago | parent [-] | | as we're doing anecdotes: I work with quants too. their LLM "assisted" work seems to be roughly the same quality (i.e. bad), but now there's much more of it. not an improvement |
| |
| ▲ | reaperducer 6 minutes ago | parent | prev | next [-] | | The barrier to entry decreased, meaning more things will get created 57 Channels and Nothin' On https://en.wikipedia.org/wiki/57_Channels_(And_Nothin%27_On) | |
| ▲ | phatfish 11 minutes ago | parent | prev | next [-] | | I think it could be a good thing. The politics sucking the air out of projects and the entitled attitude from people that want something for free NOW was getting tiresome. Raising barriers against AI slop will also create a good reason to ignore demanding non-AI slop as well. It might give the real contributors to open source projects some breathing space. | |
| ▲ | tobyjsullivan 39 minutes ago | parent | prev | next [-] | | Further to this, the quality problem is affecting the entire industry, not just FOSS. Anyone working on a large enough team has already seen some contributors pushing slop. And while banning AI outright is certainly an option at a private company, it also feels like throwing out the baby with the bath water. So we’re all searching for a solution together, I think. There was a time (decades ago) when projects didn’t need to use pull requests. As the pool of contributors grew, new tools were discovered and applied and made FOSS (and private dev) a better experience overall. This feels like a similar situation. | |
▲ | tayo42 36 minutes ago | parent | prev [-] | | As I think about it, I think lowering the barrier to entry does generally ruin things. The internet is worse off. The sports I participate in got cheaper to start and are worse. Culture's worse. What has gotten better because the barrier to entry is lower? | | |
| ▲ | PKop 19 minutes ago | parent [-] | | Of course this is true but seems to be one of the most underrated facts of modern society. Always it is proposed without question to expand access to things, to "democratize" them, open barriers, open borders. But this invariably lowers quality while crowding out those already enjoying them. Think of any quality club, park, vacation spot, restaurant, online forum whatever: none of these are improved for your own usage of the thing by adding more people to it at least beyond some threshold. A lot of this is zero sum, and quality is in tension with quantity. |
|
|
|
| ▲ | arjie 2 hours ago | parent | prev | next [-] |
It does seem like it's harming open source in a few ways:

* no longer any pressure to contribute upstream
* no longer any need to use a library at all
* Verbose PRs created with LLMs that are resume-padding
* False issues created with LLM-detection by unsophisticated users

Overall, we've lost the single meeting place of an open-source library that everyone meets at so we can create a better commons. That part is true. It will be interesting to see what follows from this. I know that for very many small tools, I much prefer to just "write my own" (read: have Claude Code write me something). A friend showed me a worktree manager project on Github and instead of learning to use it, I just had Claude Code create one that was highly idiosyncratic to my needs. Iterative fuzzy search, single keybinding nav, and so on. These kinds of things have low ongoing maintenance and when I want a change I don't need to consult anyone or anything like that. But we're not at the point where I'd like to run my own Linux-compatible kernel or where I'd even think of writing a Ghostty. So perhaps what's happened is that the baseline for an open-source project being worthwhile to others has increased. For the moment, for a lot of small ones, I much prefer their feature list and README to their code. Amusing inversion.
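[For a sense of how small such a tool's core can be: a subsequence-style fuzzy matcher, purely illustrative and not the actual tool described above, fits in a few lines of TypeScript:]

```typescript
// Classic subsequence fuzzy match: every character of `query` must
// appear in `candidate` in order, case-insensitively.
function fuzzyMatch(query: string, candidate: string): boolean {
  let i = 0;
  const lower = candidate.toLowerCase();
  for (const ch of query.toLowerCase()) {
    i = lower.indexOf(ch, i);
    if (i === -1) return false;
    i += 1;
  }
  return true;
}

// Narrow a list of worktree names as the user types.
function fuzzyFilter(query: string, names: string[]): string[] {
  return names.filter((n) => fuzzyMatch(query, n));
}
```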
| |
▲ | umvi 2 hours ago | parent | next [-] | | > no longer any need to use a library at all As someone who works on medical device software, I see this as a huge plus (maybe a con for FOSS specifically, but a net win overall). I'm a big proponent of the go-ism "A little copying is better than a little dependency". Maybe we need a new proverb: "A little generated code is better than a little dependency". Fewer dependencies = smaller cybersecurity burden, smaller regulatory burden, and more. Now, obviously foregoing libsodium or something for generated code is a bad idea, but 90%+ of npm packages could probably go. | | |
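[To make the proverb concrete, a hypothetical example not taken from the comment: a one-function utility like a lodash-style `chunk` can simply be generated and vendored inline instead of being added as a dependency:]

```typescript
// Split an array into groups of `size` -- the sort of one-function
// utility that is often pulled in as a standalone package.
function chunk<T>(arr: T[], size: number): T[][] {
  if (size < 1) throw new RangeError("size must be >= 1");
  const out: T[][] = [];
  for (let i = 0; i < arr.length; i += size) {
    out.push(arr.slice(i, i + size));
  }
  return out;
}
```

[Vendoring it means no supply-chain exposure and no version churn, at the cost of maintaining the few lines yourself.]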
▲ | no_wizard an hour ago | parent | next [-] | | > probably 90%+ of npm packages could probably go I feel npm gets held to an unreasonable standard. The fact is, tons of beginners across the world publish packages to it. Some projects publish lots of packages that only make sense for those projects but are public anyway, and then you have the bulwark packages that most orgs use. It seems unfair to me that it's always held up as the "problematic registry". When you have a single registry for the most popular, and arguably most used, language in the world, you're going to see a massive volume of all kinds of packages; it doesn't mean 90% of npm is useless. FWIW I find most PyPI packages worthless and fairly low quality, but no one seems to want to bring that up all the time | |
▲ | rpodraza an hour ago | parent [-] | | I think you are completely oblivious to the problems plaguing the NPM ecosystem. When you start a typical frontend project using modern technology, you will introduce hundreds, if not thousands, of small packages. These packages get new security holes daily, are often maintained by single people, are subject to removal and supply-chain attacks, download random crap from GitHub, etc. Each of them should ideally be approved and monitored for changes, and uploaded to the company repo to avoid build problems when one gets taken down. Compare this to the Java ecosystem, where a typical project will pull in an order of magnitude fewer packages, from vendors you can mostly trust. | |
| ▲ | danlitt an hour ago | parent [-] | | If these packages get security holes daily, they probably cannot "just go" as the parent comment suggested (except in the case of a hostile takeover). If they have significant holes, then they must be significant code. Trivial code can just go, but doesn't have any significant quality issues either. |
|
| |
| ▲ | macleginn an hour ago | parent | prev [-] | | Since code-generating AIs were likely trained on them, they won't go too far, though. |
| |
| ▲ | OGEnthusiast an hour ago | parent | prev | next [-] | | It's also now a lot easier to fork an open source project and tweak the last 10% so it works exactly as you want. | | |
▲ | gingerlime an hour ago | parent [-] | | Exactly. Whilst I can see the problem with vibe-coded "contributions" that lower the signal/noise ratio on big OSS projects, it's also "liberating" in the sense that forking becomes much more viable now. If previously it took time to dive into a project to tweak it to your needs, it's now trivial. So in many senses AI is democratising open-source.
| |
▲ | cosmic_cheese an hour ago | parent | prev | next [-] | | It may be worth considering how much the impact of LLMs is exacerbated by friction in the contribution process. Many projects require a great deal of bureaucracy, hoop-jumping, and sheer dogged persistence to get changes merged. It shouldn't be surprising if some are finding it easier to just vibe-customize their own private forks as they see fit, both skipping that whole mess and allowing for modifications that would never have been approved in mainline anyway. | |
| ▲ | chrneu 2 hours ago | parent | prev [-] | | AI coding sort of reminds me of when ninite originally came out for windows. It was like a "build your own OS". Check boxes and get what you need in a simple executable. AI coding is kind of similar. You tell it what you want and it just sort of pukes it out. You run it then forget about it for the most part. I think AI coding is kind of going to hit a ceiling, maybe idk, but it'll become an essential part of "getting stuff done quickly". |
|
|
| ▲ | Lerc an hour ago | parent | prev | next [-] |
I really don't like the narrative of 'X is killing Y', or 'Z is dead' - everything being treated as an existential threat. I'm also not particularly fond of the other extreme of toxic positivity, where any problem is just a challenge and everybody is excited to take it on. One seems to understate the level of agency people have and the other seems to overstate it. The world is changing. Adapting does seem to be the rational approach. I don't think Open Source is being killed, but it does need to manage the current situation in a way that provides the best outcome. I have been thinking that there may be merit in AI branches or forks. Open source projects direct any AI-produced PRs to the AI branch. Maintainers of that branch curate the changes to send upstream. The maintainers of the original branch need not take an active involvement in the AI branch. If the AI branch is inadequately maintained or curated, then upstream simply receives no patches. In a sense it creates an opportunity for people who want to contribute. It produces a new area where people can compartmentalise their involvement without disrupting the wider project. This would lower the barrier of entry to productively supporting an open source project. I doubt the benefit of resume-padding will persist long in an AI world. By the very nature of their act, they are showing that what they claim to do is unremarkable.
| |
| ▲ | milowata an hour ago | parent [-] | | I actually started writing a very similar essay, but the hyperbole got too out of hand – open source isn't dying anytime soon. I do think that SDKs and utility-focused libraries are going to mostly go away, though, and that's less flashy but does have interesting implications imo. https://meelo.substack.com/p/a-mild-take-on-coding-agents | | |
▲ | Lerc 13 minutes ago | parent [-] | | I'm inclined to agree somewhat about libraries. I'm not entirely certain that it is a bad thing. Perhaps it would be more accurate to say libraries will change in form. There is a very broad spectrum of what libraries do. Some of the very small ones may just become purpose-written inline code. Some of the large, hated-but-necessary libraries might get reduced into manageable chunks if the people who use them can utilise AI to strip them down to the necessary components. Projects like that are a lot of work for an individual, which makes it easier to just bite the bullet and use the bloated mass library. Getting an AI to do that drudge work might lower the threshold at which some of those things get improved. I also wonder about the idea of skills as libraries. I have already found that I am starting to put code into skills for the AI to use as templates for output. Developing code in this way would let you add the specific abilities of a library to any skill-supporting AI. A simple example is this https://htmlpreview.github.io/?https://github.com/Lerc/JustS... which was generated by a skill that contains the source for the image decoders within the skill itself. |
|
|
|
| ▲ | Flavius 2 hours ago | parent | prev | next [-] |
| Open Source isn't a tech stack or a specific way of typing syntax, it’s an ideology. It’s the belief that knowledge and tools should be free to share, study and modify. You cannot kill an idea. Whether I write a function by hand or 'vibe' it into existence with an LLM, the act of liberating that code for others to use remains the same. |
| |
| ▲ | AlexandrB 2 hours ago | parent [-] | | What's not the same is that the LLMs used to create the code are highly centralized and controlled. I suspect it's only a matter of time until the content industries start trying to restrict what code LLMs are allowed to produce so that you can't use an LLM to bypass DRM. | | |
| ▲ | charcircuit an hour ago | parent | next [-] | | There are competent open source LLMs out today. They are not highly centralized. | | | |
▲ | georgemcbay 22 minutes ago | parent | prev [-] | | > I suspect it's only a matter of time until the content industries start trying to restrict what code LLMs are allowed to produce so that you can't use an LLM to bypass DRM. I don't think this is a possibility anymore, for multiple reasons. As others have already pointed out, there are already "open models" available to use and that genie can't be put back in the bottle; restricting the commercial models wouldn't fix the issue. And secondly, I think the state of commercial LLMs shows that the big tech companies behind LLMs have already become far more politically powerful than the traditional content industries. (I don't think this is a good thing, but I think it is a thing.) If you had explained the LLM situation to 15-years-ago me, in terms of how they are trained (on almost entirely copyrighted material) and what kind of output they can generate, and told me Disney hadn't managed (or really even tried) to sue the various players out of existence, I wouldn't have believed it. Yet here we are. |
|
|
|
| ▲ | jph 2 hours ago | parent | prev | next [-] |
| I maintain multiple open source projects. In the past two months I've seen an uptick in AI-forgery attacks, and also an uptick in legitimate code contributions. The AI-forgery attacks are highly polished, complete with forged user photos and fake social networking pages. The legitimate code contributions are from people who have near-zero followers and no obvious track record. This is topsy-turvy yet good news for open source because it focuses the work on the actual code, and many more people can learn how to contribute. So long as code is good enough to get in the right ballpark for a PR, then I'm fine cleaning the work up a bit by hand then merging. IMHO this is a great leap forward for delivering better projects. |
|
| ▲ | charcircuit an hour ago | parent | prev | next [-] |
| >This also removes the typical more organic selection process of libraries and tooling, replacing it with whatever was most prevalent in the LLM’s training data Another article written by someone who doesn't actually use AI. Claude will literally search "XYZ library 2025" to find libraries. That is essentially equivalent to how it's always worked. It's not just what is in the dataset. |
| |
| ▲ | embedding-shape an hour ago | parent [-] | | > "XYZ library 2025" I'm fairly sure you made a typo, but considering the context, it's a pretty funny typo and would kind of demonstrate the point parent was trying to make :) I agree with you overall though, the CLI agents of today don't really suffer from that issue, how good the model is at using tools and understanding what they're doing is much more important than what specific APIs they remember from the training data. |
|
|
| ▲ | dom96 2 hours ago | parent | prev | next [-] |
| How many others are now reluctant to open source their code because they don't want it to end up in the training for an LLM? I certainly am. |
| |
| ▲ | dmarcos an hour ago | parent | next [-] | | It definitely feels less fun. Harder to get attribution, build a reputation, a community… Common driving forces for people to contribute to open source. | |
▲ | honestduane an hour ago | parent | prev | next [-] | | This is honestly why I have stopped contributing to open source. I was fine with my work being a gift to all of humanity equally, but I did not consent to it being a gift to a for-profit company that I'm not personally benefiting from and that won't even follow the spirit of the open source license. If AI doesn't have to follow the GPL, then I'm not going to create GPL code. | |
| ▲ | paodealho an hour ago | parent | prev | next [-] | | Me. I've never been a maintainer for any big opensource project, so it won't make a dent on anything, but now my contributions are exactly zero. | |
▲ | pelasaco an hour ago | parent | prev [-] | | some startups are already avoiding the open source route, exactly because of that. You publish your code, then 2 weeks later we have dozens of "$PROJ rewritten in $LANG" projects: 30,000 LOC plus a super-verbose README.md, done in one week, in fewer than 10 commits, from somebody who never wrote a single line of OSS. |
|
|
| ▲ | Stevvo an hour ago | parent | prev | next [-] |
The article doesn't seem to have anything new to add to the discussion. It's just a bunch of links to previous anti-AI articles the author has written about stories we have all read before, such as the collapse in new Stack Overflow questions.
|
| ▲ | ivan_gammel an hour ago | parent | prev | next [-] |
I doubt it’s killing open source. The “too big to fail” software will be maintained no matter what, but the contribution model will change. It is not great, but we can live with it - the majority of users of OSS never touch the code, so nothing is going to change for them. For a few enthusiasts the barrier will be higher, but we need some trust-building incorporated into the process anyway. The small libraries will be eliminated as a viable solution for production use, but that’s a good thing: they are a supply chain risk, which is significantly amplified in the LLM age. It may also happen, and it would be great if it did, that open training datasets replace those libraries, recalibrating LLM output and shifting it from legacy to more modern approaches, as well as teaching how to achieve certain things.
|
| ▲ | dmarcos an hour ago | parent | prev | next [-] |
AI mediation between the end dev and open source definitely reduces the incentives for maintainers who look to build community, visibility, and reputation, and to collaborate with others… I also love AI, so I'm not sure what the solution could be.
|
| ▲ | AtlasBarfed 38 minutes ago | parent | prev | next [-] |
I understand that there are a lot of open source projects that are massive collaborations. There are also a lot of open source projects that are simply one-man shows, and LLMs should be massively helping those; I really don't see that so far. I would say they should be a massive gain to the open source community because, let's face it, the people who do open source are simply going to be different from the people who just feed on it. LLMs should be a massive enabler for open source. They should permit easy porting between architectures, programming languages, and interfaces to a degree that simply wasn't possible before. Again, I'm not really seeing that.
|
| ▲ | jauntywundrkind 2 hours ago | parent | prev | next [-] |
I think vibe coding might de-emphasize software as an end product. That part is more doable. But the general-purpose machinery, the substrate we work on? That's hugely open source today, and will gladly accept and make use of whatever platform innovation you can offer up. The authors talk about it being harder to get traction. That's true because of LLMs, but it has also been the case for a while now. There's so much open source already, so many great tools, that it takes real effort and distinction to stand out & call attention to yourself.
|
| ▲ | waynesonfire an hour ago | parent | prev | next [-] |
I don't understand this article. ChatGPT made Google search and SO irrelevant, so vibe coding is killing open source? ... That's a stretch. > The LLM will not interact with the developers of a library or tool, nor submit usable bug reports, or be aware of any potential issues no matter how well-documented. Aren't these interactions responsible for the claimed burn-out suffered by open-source maintainers? If you want interaction then, I don't know, go to a conference? Again, I don't get the issue. Seems like a good thing! Users are able to find answers and solutions to their questions more efficiently--all the while still using the open-source library. The usage chart is still seeing tremendous growth! Developers are still using the library to solve their problems. It seems like exactly what open-source was intended for. The issue to me is that the incentives for investing in open-source have changed for some maintainers, in such a way that they're no longer in alignment with the return on their investment. Maybe there are fewer people interacting with them and so fewer people to discover how "great" they are. Maybe fewer eyeballs on their resume. The point is, open-source was a means to an end. And, so, frankly, I don't give a shit. LLMs are making open-source technology accessible to more people and that's a good thing.
|
| ▲ | pelasaco an hour ago | parent | prev | next [-] |
| Yes, it will kill open source—at least as we know it. I’m convinced that GitHub and GitLab will eventually stop offering their services for free if the flood of low-quality, "vibe-coded" projects—complete with lengthy but shallow documentation—continues to grow at the current rate. The trend of rewriting existing programs ("vibe-coding" a rewrite of $PROG in Rust, for example) threatens to undermine important, battle-tested projects like SQLite. As I described in this comment: https://news.ycombinator.com/item?id=46821246. I’m quite sure developers will increasingly close-source their work and black-box everything they possibly can. After all, source code that cannot be seen cannot be so easily "rewritten" by vibe-coders. |
|
| ▲ | RcouF1uZ4gsC 2 hours ago | parent | prev [-] |
Open source and free software was the largest transfer of wealth, in the form of techne, freely from the craftspeople to the business people. Knowing how to write a database could once make one fabulously rich. Now the person who knows how to make and promote a simple CRUD app backed by MySQL becomes the rich one, while the db people beg for donations. Linux killed Sun/Solaris and SGI Irix. Developers have voluntarily moved further down the chain of value - now describing themselves primarily as business liaisons who can translate to code. All the computer whispering necessary to do all this is freely available and digestible for free. LLMs are just the expected endpoint of this.
| |