| ▲ | skeledrew 8 hours ago |
| I think this article is making a pretty big assumption: that people making things with AI are also going to be publishing them. And that's just the opposite of what should be expected, in the general case. Like I've been making things, and making changes to things, but I haven't published any of it because, well, it's pretty specific to my needs. There are also things I won't consider publishing for now, even if they're generally useful, because the moat has moved from execution effort to ideas, and we all want to maintain some kind of moat to boost our market value (while there's still one). Everyone has reasonable access to the same capabilities now, so everyone can make what they need to their exact specs easily, quickly and cheaply. So while there are many things being made with AI, there are ever fewer reasons to publish most of it. We're in an era of highly personalized software, which just isn't worth generalizing and sharing when that effort is now greater than creating from scratch or modifying something that's already close enough. |
|
| ▲ | chromacity 6 hours ago | parent | next [-] |
| > I think this article is making a pretty big assumption: that people making things with AI are also going to be publishing them. And that's just the opposite of what should be expected, for the general case. The premise is that AI has already fundamentally changed the nature of software engineering. Not some specific, personal use case, but that everything has changed and that if you're not embracing these tools, you'll perish. In light of this, I don't think your rebuttal works. We should be seeing evidence of meaningful AI contributions all over the place. |
| |
| ▲ | edgarvaldes 4 hours ago | parent [-] | | Hard agree. A 10x productivity increase would bleed outside of personal or internal use cases, even without any deliberate effort to share. |
|
|
| ▲ | freedomben 7 hours ago | parent | prev [-] |
Agree. There's also a weird ideological thing in open source right now, where any AI must be AI slop, and no AI is the only solution. That has strongly disincentivized legitimate contributions from people, and I have to imagine it's having an impact. There's a very real problem of low-effort AI slop, but throwing out the baby with the bathwater is not the solution. That said, I do kind of wonder if the old model of open source just isn't very good in the AI era. Maybe it will be when AI gets a lot better, but for now it does take real human effort to review and test. If contributors were reviewing and testing like they should be, it wouldn't be an issue, but far too many people just run the AI and don't even look at the output before sending the PR. It's not the maintainer's job to do all the review and testing of a low-effort push. That's not fair to them, and even setting that aside, it's a terrible model for software that you share with anyone else. |
| |
| ▲ | skeledrew 7 hours ago | parent | next [-] | | > where any AI must be AI slop, and no AI is the only solution Yep, also a huge factor. Why publish something you built with an AI assistant if you know it's going to be immediately dunked on, not because the quality may be questionable, but because someone sees an em-dash, or an AI coauthor, and immediately goes on a warpath? Heck, I commented[0] on that attitude just a few hours ago. I find it really irritating. [0] https://github.com/duriantaco/fyn/issues/4#issuecomment-4117... | |
| ▲ | johnnyanmac 2 hours ago | parent | prev | next [-] | | > where any AI must be AI slop, and no AI is the only solution. AI as of now is like ads. Ads as a concept are not evil, but what they've done to everyday life is evil enough that I wouldn't flinch at them being banned/highly regulated one day (well, not much: the economic fallout would be massive, but my QoL would go way up). That's how I feel here. And looking at the PRs some popular repos have to deal with, we're well into the "shove this pop-up ad with a tiny close button you can't reach easily" stage of AI. | |
| ▲ | kubanczyk 7 hours ago | parent | prev [-] | | You know what else strongly disincentivized legitimate contributions from people? Having your code snatched and its copyright disregarded, to the benefit of some rando LLM vendor. People can just press "pause" and wait until they can see whether what they're fueling brings joy to the world. (Which it might in the end. Or not.) | | |
| ▲ | freedomben 4 hours ago | parent | next [-] | | For sure, that's legit too. I've had to grapple with that feeling personally. I didn't get to a great place with it, other than hoping that AI is democratized enough that it can benefit humanity. When I introspected deeply enough, I realized I contributed to open source for two reasons, nearly equally: 1. To benefit myself with features/projects 2. To benefit others with my work. 1 by itself would mean not bothering with PRs, modifications, etc.; it's way easier to hoard your changes than to go through the effort of getting them merged upstream. 2 by itself isn't enough motivation to spend the effort getting up to speed on the codebase, testing, etc. Together, though, it's powerful motivation for me. I have to remind myself that both things are still a net positive, even with AI training on my stuff. It's certainly not all pros (there are a lot of cons with AI too), but on the whole I think we're headed for a good destination, assuming open models continue to progress. If it ends up winner-takes-all for Anthropic or OpenAI, then that changes my calculus and will probably really piss me off. Luckily I've gotten positive value back from those companies, even considering having to pay for it. |
| ▲ | JasperNoboxdev 6 hours ago | parent | prev [-] | | Been going back and forth on this with open source tools I've built. The training data argument is valid, but honestly the more immediate version of the same problem is that someone can just take your repo, feed it to an agent, and have their own fork in an afternoon. The moat used to be effort: nobody wants to rewrite this from scratch (especially when it's free). What's left is actually understanding why the thing works the way it does. Not sure that's enough to sustain open source long-term? I guess we all have to get used to it? | | |
| ▲ | freedomben 4 hours ago | parent [-] | | > but honestly the more immediate version of the same problem is that someone can just take your repo, feed it to an agent, and have their own fork in an afternoon. Indeed, I've got a few applications I've built or contributed to that are (A)?GPL, and for those I do worry about this AI-washing technique. For libraries that are MIT or otherwise permissive anyway, I don't really care. (I default to *GPL for applications, MIT/Apache/etc for libraries) |
|
|
|