mikkupikku 10 hours ago

Subjecting every real contributor to the "AI guardian" would be unfair, and shadow banning is ineffective when you're dealing with a large number of drive-by nuisances rather than a small number of dedicated trolls. Public humiliation is actually a great solution here.

zimpenfish 9 hours ago | parent | next [-]

> Subjecting every real contributor to the "AI guardian" would be unfair

Had my first experience with an "AI guardian" when I submitted a PR to fix a niche issue with a library. It ended up suggesting that I do things a different way, which would have involved setting a field on a struct before the struct existed (which is why I didn't do that in the first place!)

Definitely soured me on the library itself, and on submitting PRs on GitHub generally.

johnisgood 10 hours ago | parent | prev | next [-]

How effective is it against people who simply do not care?

notahacker 9 hours ago | parent | next [-]

I suspect people are doing it to pad their resume with "projects contributed to" rather than to troll the maintainers, so if they're paying any attention they probably do care...

mikkupikku 10 hours ago | parent | prev | next [-]

Most people do, and those who don't still get banned so...

metalman 9 hours ago | parent | prev [-]

What you say is, of course, the only relevant issue. I can attest to my own experiences on both sides of this situation: running a small business that is being inundated by job seekers sending AI-written letters and resumes, dealing with larger companies that have excess capacity to throw at work orders but an inability to understand detail, AND, AND!, my own fucking need to survive in this mess, which is forcing me to dismiss certain niceties and adherence to "professional" (ha!) norms. So while the inundation from people from India (and not just there) is sometimes irritating, I have also wrangled with some of them personally, and under all that is generally just another human trying to get by the best they can, so....

zoho_seni 9 hours ago | parent | prev [-]

You could easily guard against bullshit issues, so you can focus on what matters. If the issue is legit, it goes ahead to a human reviewer. If it's run-of-the-mill low-quality AI output or an irrelevant issue, just close it. Or even nicer: for false positives, let the person who opened the issue "argue" with the AI to further explain that it's a legit issue.
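The triage flow described here can be sketched in a few lines. This is a minimal illustration, not any real bot's implementation: `classify_issue` stands in for an actual LLM call (stubbed with a toy heuristic so the sketch runs), and all names and thresholds are hypothetical.

```python
# Sketch of an issue-triage gate: an LLM (stubbed here) labels each new
# issue, legit ones are routed to a human reviewer, low-quality ones are
# closed with a note, and the reporter can reply to contest the verdict.

from dataclasses import dataclass


@dataclass
class Issue:
    title: str
    body: str


def classify_issue(issue: Issue, project_context: str) -> str:
    """Placeholder for an LLM call returning 'legit' or 'low_quality'.

    A real implementation would send the issue text plus project context
    to a model and parse its verdict; this stub uses a trivial heuristic
    just so the sketch is runnable.
    """
    if len(issue.body.strip()) < 40 or "as an ai" in issue.body.lower():
        return "low_quality"
    return "legit"


def triage(issue: Issue, project_context: str) -> str:
    verdict = classify_issue(issue, project_context)
    if verdict == "legit":
        return "route_to_human"
    # Close, but leave the door open: the reporter can reply to argue
    # with the bot -- the escape hatch for false positives.
    return "close_with_appeal_note"
```

The key design point is the last branch: the gate never silently discards anything, so a false positive costs the reporter one reply rather than a lost issue.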

nchmy 9 hours ago | parent [-]

How is an llm supposed to identify an llm-generated bullshit issue...? It's the fox guarding the henhouse.

zoho_seni 8 hours ago | parent [-]

Just try it and you'll see whether it can work. Copy-paste some of these issues, give it the context of the project, and ask whether the issue makes sense.