| ▲ | hombre_fatal a day ago |
| You say "on a long enough timeline", but you already can't tell today when it's in the hands of someone who knows what they're doing. I think a lot of anti-LLM opinions just come from interacting with the lowest-effort LLM slop and not realizing that it's really a problem with a low-value person behind it. It's why "no AI allowed" is pointless: high-value contributors won't follow it because they know how to use it productively and they know there's no way for you to tell, and low-value people never cared about wasting your time with low-effort output, so the rule is performative. E.g. if you tell me AI isn't allowed because it writes bad code, then you're clearly not talking to someone who uses AI to plan, specify, and implement high quality code. |
|
| ▲ | datsci_est_2015 a day ago | parent | next [-] |
| > It's why "no AI allowed" is pointless … If you tell me AI isn't allowed because it writes bad code
| I disagree that the rule is pointless, and your last point is a strawman. AI is disallowed because it’s the manner in which the would-be contributors are attempting to contribute to these projects. It’s a proxy rule. Unfortunately for AI maximalists, code is more than just letters on the screen. There needs to be human understanding, and if you’re not a core contributor who’s proven you’re willing to stick around when shit hits the fan, a +3000 PR is a liability, not an asset. Maybe there needs to be something like the MMORPG concept of “Dragon Kill Points (DKP)”, where you’re not entitled to loot (contribution) until you’ve proven that you give a shit. |
| |
| ▲ | bombcar a day ago | parent | next [-] | | > Unfortunately for AI maximalists, code is more than just letters on the screen. There needs to be human understanding, and if you’re not a core contributor who’s proven you’re willing to stick around when shit hits the fan, a +3000 PR is a liability, not an asset. This isn't necessarily true; I've seen some projects absorb a PR of roughly that size, and after the smoke tests and other standard development stuff, the original PR author basically disappeared. It added a feature he wanted, he tested and coded it, and got it in. | | |
| ▲ | datsci_est_2015 a day ago | parent [-] | | So because some projects can absorb some PRs of a certain size, all projects should be able to absorb PRs of that same size? This anecdotal argument is a dead end. The nuance is clear: not all software is the same, and not all edits to software are the same. | | |
| ▲ | ApolloFortyNine a day ago | parent [-] | | >So because some projects can absorb some PRs of a certain size, all projects should be able to absorb PRs of that same size? Your argument has nothing to do with AI and everything to do with PR size and 'fire and forget' feature merges. That's what the commenter you're responding to is pointing out. | | |
| ▲ | datsci_est_2015 a day ago | parent [-] | | And my entire point is that LLM-generated feature requests are strongly correlated with high-risk merge requests / pull requests, a point the commenter made no meaningful argument against. Instead the commenter chose to focus on the size of the PR and say “well I’ve seen it in the wild”. The way to get around this without getting all the LLM influencer bros in an uproar is to come up with a system that allows open source libraries to evaluate the risk of a PR (including the author’s ability to explain wtf the code does) without referencing AI, because apparently it’s an easily-triggered community. | | |
| ▲ | hombre_fatal 2 hours ago | parent [-] | | Maybe you'll agree with another post I made about how UX/processes already fail us here (without LLMs) and they should be improved: https://news.ycombinator.com/item?id=47324816 I think that's the only shot at progress since it can address the general problem instead of trying to special-case unenforceable rules that you hope the lowest quality people follow. For example, a 3000+ line PR with no communication beforehand is already a low quality PR before AI. And it's one of the most annoying contributions to deal with since you have to basically tell them "sorry but all that work you did isn't acceptable". Yet they probably did all of it in earnest. Presumably you already have a policy where you accept random PRs for small tweaks like doc fixes, but you don't want unsolicited PRs that make substantial changes. So a rule against AI doesn't change anything there. And if you saw an uptick in large unsolicited PRs, then surely the solution is to update the process like disallow PRs that don't link to an issue. |
|
|
|
| |
| ▲ | darkwater a day ago | parent | prev | next [-] | | > and if you’re not a core contributor who’s proven you’re willing to stick around when shit hits the fan, a +3000 PR is a liability, not an asset. And in the context of high-value contributors that GP was mentioning, they are never going to land a +3000 PR because they know there is going to be a human reviewer on the other side. | |
| ▲ | pixl97 a day ago | parent | prev | next [-] | | >where you’re not entitled to loot (contribution) until you’ve proven that you give a shit. So what metric are you going to try to use to prove yourself? | |
| ▲ | sigseg1v a day ago | parent | prev | next [-] | | Vibe coded slop is a 50 DKP minus of course | |
| ▲ | cindyllm a day ago | parent | prev [-] | | [dead] |
|
|
| ▲ | nananana9 a day ago | parent | prev | next [-] |
| I don't see an issue here. You keep using AI to create high value contributions in the projects that accept it, I will keep not using it in mine, and we can see who wins out in 10 years. |
|
| ▲ | fwip a day ago | parent | prev | next [-] |
| > high value contributors won't follow it
| High-value contributors follow the rules and social mores of the community they are contributing to. If they intentionally deceive others, they are not high-value. |
| |
| ▲ | hombre_fatal 2 hours ago | parent | next [-] | | This is a good example of my point. Instead of progressing to a system resilient to the fact that you can't know how code was written, you've created a rule that, because it's unenforceable and deniable, must retreat to moralization about what someone does in private. That might make you feel good, but it won't work. | |
| ▲ | pixl97 a day ago | parent | prev [-] | | Ah, the no true Scotsman theory. | | |
| ▲ | thunderfork 19 hours ago | parent [-] | | Arguing that "doesn't secretly, sneakily break project rules" is an essential component of a quality contributor isn't a "no true scotsman" argument; it's a statement about qualifications. | | |
| ▲ | pixl97 3 hours ago | parent [-] | | You see where this becomes a religion-like argument, right? Since it's secret and sneaky, there is no way to measure it. So as far as any other participant knows there is no measurable difference; hence your argument depends on said agents being 'pure' and 'true', hence the exact definition of the no true Scotsman fallacy. I hope you see how quickly this will advance from a project being about accomplishing some goal to a project being about humans showing they are the ones writing code. Much like we see in religions where people don't give money to the poor to benefit the poor, but show they give money to the poor to benefit themselves. Hence the game playing will continue and the underlying problem will never be addressed.
|
|
|
|
| ▲ | beepbooptheory a day ago | parent | prev | next [-] |
| But then why have any contributions at all? Like it's been years and years now; if all this is true, you'd think there would be more of a paradigm shift? I'm happy, I guess, waiting for Godot like everyone else, but the shadows are getting a little long now; people are starting to just repeat the same things over and over. Like, I am so tired now, it's causing such messes everywhere. Can all the best things about AI be manifest soon? Is there a timeline? Like what can I take so that I can see the brave new world just out of reach? Where can I go? If I could just even taste the mindset of the true believer for a moment, I feel like it would be a reprieve. |
| |
| ▲ | pixl97 a day ago | parent [-] | | > Where can I go? Off the internet. Maybe it's just time we all face that the public internet is dead. Maybe a trusted private internet, though that comes with its own risks and tradeoffs. Maybe we start doing PRs over mailed USB keys. Anyone with enough interest will do it, but it will cut out the bots. We're back to a 90's sneakernet. Any internet presence may become a read-only site telling others how to reach you offline. The information superhighway died a long time ago. 4chan enlightened me on the power of intelligent stupidity. The machinations of a few smart people could embolden countless stupid people to cause nearly unlimited damage. Social media gathering up the smart and dumb alike allowed bullshit asymmetry to explode onto the scene and burned out anyone with a modicum of intelligence.
|
|
| ▲ | lpcvoid a day ago | parent | prev [-] |
| All LLM output is slop. There's no good LLM output. It's stolen code, stolen literature, stolen media condensed into the greatest heist of the 21st century. Perfect capitalism: big LLM companies don't need to pay royalties to humans, while selling access to a service which generates monthly revenue. |
| |
| ▲ | hombre_fatal a day ago | parent | next [-] | | Whether it trained on real world "stolen" code is an implementation detail. A controversial one, but it isn't a supporting argument for whether it can write high quality, functional code or not. | | |
| ▲ | jacquesm a day ago | parent [-] | | Sorry, but no, that is not a detail, that is a major sticking point for me. |
| |
| ▲ | __alexs a day ago | parent | prev | next [-] | | I came from a poor background and stole pretty much all the textbooks I used to learn programming as a kid. I also stole all the music I listened to while studying them. Is everything I write slop for the same reason? | | |
| ▲ | lpcvoid a day ago | parent [-] | | No. You're a human, who went through real life experiences. You learned, developed as a human being. You made mistakes and grew from them. You did what you had to do to advance. What you output has intrinsic value because of all this. I argue that even when you roll your face on your keyboard, the output is more valuable than ten pages of slop output from an LLM, since it's human, with all the history, experience, emotions and character which came before it. | |
| ▲ | the_biot a day ago | parent | next [-] | | A quote from Neuromancer comes to mind: "But I ain't likely to write you no poem, if you follow me. Your AI, it just might. But it ain't no way human."
| |
| ▲ | sigbottle a day ago | parent | prev | next [-] | | I don't know why this got downvoted. I've already been so frustrated by HN LIDAR mindsets but holy shit. Human society exists because we value humans, full stop. The easiest way to "solve" all of humanity's problems is to simply say that humans aren't valuable. Sometimes it feels like we're conceding a ridiculous amount of ground on that basic principle every year - one more human value gone because it "doesn't matter", so hey, we've obviously made progress! | | |
| ▲ | bigstrat2003 a day ago | parent | next [-] | | Agreed. I think that sometimes people on HN lose sight of what is actually important, which is human flourishing. The other day there was someone arguing that the best thing to do to fix loneliness problems in society is to remove the human need for socializing. Which... is certainly one way to fix the problem, I guess, but it completely misses the point. The point is not to fix a mismatch between essential human desires and what we can attain; the point is to work on fulfilling those desires! Just something that goes with nerd autism, I guess. |
| ▲ | Fnoord a day ago | parent | prev | next [-] | | > I don't know why this got downvoted. I've already been so frustrated by HN LIDAR mindsets but holy shit The extreme sides (proponents, opponents) are clear, opposites, and fight each other. More nuanced takes get buried as droplets in a bucket. Likely a goal. > Human society exists because we value humans, full stop. Call me a cynic, but I do not believe every human being agrees with this sentiment. From HR acting as if humans are resources, to human beings being dehumanized as workers, civilians, cannon fodder, and... well, the product. Every time human rights are violated and we do not stand up to it, we lose. I have a very simple proposal for a human right: the right for a human being to know whether the other side is a human being, yes or no, and if not, to speak gratis (no additional fee allowed) to a human being instead. Furthermore, ML must always cite the sources it used, and the ML programmer is responsible for mistakes. This would increase insurance costs so much that LLMs in public would die, but SLMs could thrive. |
| ▲ | pixl97 21 hours ago | parent | prev [-] | | >Human society exists because we value humans, full stop. Eh, human society exists because it is an emergent behavior of the evolutionary advantage afforded at the time of adoption by the human species. There is no iron rule stating that it must continue into the future, or even that it can exist into the future. More so, the value of a human has wildly fluctuated over history and culture. The village chief, nobles, and the king were all high-value humans. The villagers would be middle to low value, and others may be considered no value. The industrial age began to change this some, as value started to move from the merchant class to the villager class as many high-production jobs needed less and less training to complete. With industrialization, businesses running machines and production lines needed as many people as they could get. Still, human rights were hard fought in places like America, where labor wars broke out. In the modern US we've set up a dangerous set of ideals that will most likely end in disaster because they are in conflict with general human values: "pull yourself up by your bootstraps", "any collective action is communism and communism will turn you into a pillar of salt if you dare look at it", and "greed is good". Couple that with TV media and social media owned by rich billionaires and you're not going to see much serious opposition to these ideals. But if/as labor loses its value, so will the humans that performed that labor. After decades of optimizing human society for maximal capital extraction, values are dead, and the ever-present thought police owned by the rich will make sure you don't cause too much trouble by resurrecting them.
| |
| ▲ | __alexs a day ago | parent | prev [-] | | The Neo-Victorian perspective of The Diamond Age is not a luxury most of us are going to be able to afford unfortunately. |
|
| |
| ▲ | mikkupikku a day ago | parent | prev | next [-] | | I'm fine with calling all LLM outputs slop, but I'll draw the line at asserting there's no good LLM output. LLM output is good when it works, and we can easily verify that a lot of code from LLMs does work. That the code LLMs output is derivative of copyrighted works is neither here nor there. First of all, ALL creative work is derivative. Secondly, IP is absurd horse shit and we never should have humored the premise of it being treated like real property. |
| ▲ | sieep a day ago | parent | prev [-] | | Well put. I'm gonna start parroting this talking point more from now on. | |
| ▲ | ronsor a day ago | parent [-] | | And I thought being a stochastic parrot was limited to LLMs, but apparently they learned it from somewhere... |
|
|