skeptic_ai 10 hours ago

Why don't you just put an AI guardian on it to close these issues or ask the submitters to change their story? Or shadow ban them.

mikkupikku 10 hours ago | parent | next [-]

Subjecting every real contributor to the "AI guardian" would be unfair, and shadow banning is ineffective when you're dealing with a large number of drive-by nuisances rather than a small number of dedicated trolls. Public humiliation is actually a great solution here.

zimpenfish 9 hours ago | parent | next [-]

> Subjecting every real contributor to the "AI guardian" would be unfair

Had my first experience with an "AI guardian" when I submitted a PR to fix a niche issue with a library. It ended up suggesting that I do things a different way, one which would have involved setting a field on a struct before the struct existed (which is why I didn't do it that way in the first place!)

Definitely soured me on the library itself and also on submitting PRs on GitHub.

johnisgood 10 hours ago | parent | prev | next [-]

How effective is it against people who simply do not care?

notahacker 10 hours ago | parent | next [-]

I suspect people are doing it to pad their resume with "projects contributed to" rather than to troll the maintainers, so if they're paying any attention they probably do care...

mikkupikku 10 hours ago | parent | prev | next [-]

Most people do, and those who don't still get banned so...

metalman 10 hours ago | parent | prev [-]

What you say is, of course, the only relevant issue. I can attest to my own experiences on both sides of this situation: running a small business that is being inundated by job seekers sending AI-written letters and resumes, and dealing with larger companies that have excess capacity to throw at work orders but an inability to understand detail, AND, AND!, my own fucking need to survive in this mess, which is forcing me to dismiss certain niceties and adherence to "professional" (ha!) norms. So while the inundation from people from India (and not just there) is sometimes irritating, I have also wrangled with some of them personally, and under all that is generally just another human trying to get by as best they can, so...

zoho_seni 9 hours ago | parent | prev [-]

You could easily guard against bullshit issues so you can focus on what matters. If the issue is legit, it goes ahead to a human reviewer. If it's a run-of-the-mill low-quality or irrelevant AI issue, just close it. Or even nicer: for false positives, let the person who opened the issue "argue" with the AI to further explain why it's a legit issue.
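A minimal sketch of that triage flow, assuming an OpenAI-style client; the model name, labels, and whatever wiring routes the result back to GitHub are placeholders, not any project's actual setup:

    # Classify an incoming issue as legit or low quality, assuming the
    # openai Python client; model name and labels are placeholders.
    from openai import OpenAI

    client = OpenAI()

    def triage_issue(title: str, body: str, project_context: str) -> str:
        """Return "LEGIT" (route to a human reviewer) or "LOW_QUALITY"
        (close, but let the submitter reply to contest a false positive)."""
        prompt = (
            "You screen incoming GitHub issues for a maintainer.\n\n"
            f"Project context:\n{project_context}\n\n"
            f"Issue title: {title}\nIssue body:\n{body}\n\n"
            "Answer with exactly one word: LEGIT or LOW_QUALITY."
        )
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        answer = resp.choices[0].message.content.strip().upper()
        return "LEGIT" if answer.startswith("LEGIT") else "LOW_QUALITY"

Anything that comes back LOW_QUALITY could get a templated close message inviting the submitter to reply with details, so false positives have a way back in.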

nchmy 9 hours ago | parent [-]

How is an llm supposed to identify an llm-generated bullshit issue...? It's the fox guarding the henhouse.

zoho_seni 8 hours ago | parent [-]

Just try it and you'll see whether it can work. Just copy-paste some of these issues, give it context about the project, and ask if they make sense.
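If you want to try it by hand, something like this pasted into any chat model is enough (the project description and issue text below are hypothetical):

    You maintain <project>, a <one-line description of what it does>.
    Here is an issue someone just opened:

    ---
    <paste the issue title and body here>
    ---

    Does this describe a real, reproducible problem in this project, or
    does it read like generic, auto-generated filler? Answer in one or
    two sentences and point to the parts that make you think so.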

blitzar 10 hours ago | parent | prev | next [-]

the only way to stop a bad guy with an llm is with a good guy with an llm

ironbound 9 hours ago | parent [-]

That's just shoveling money to tech companies

Hamuko 10 hours ago | parent | prev [-]

I intensely dislike the idea that we need more AI in order to deal with AI.

If I ever need to start using an AI to summarize text that someone else has generated with AI from a short summary, I'm gonna be so fucking done.

Sharlin 8 hours ago | parent | next [-]

Small brain: create a solution looking for a problem

Big brain: create a solution solving an existing problem

Galaxy brain: create a solution that creates its own problems

ezst 10 hours ago | parent | prev | next [-]

I relate, and then I realized that's been the basis of spam handling for decades now. It's depressing, and unfortunately we aren't putting this genie back in the bottle.

danaris 9 hours ago | parent [-]

How so?

Spam, for decades, has been a matter of just shoveling truckloads of emails out the door and hoping that one or two get a gullible match.

Blocking spam, for decades, has been a matter of heuristic pattern-matching.

I don't see how that is the same as "fighting LLMs with LLMs", or how it could be said to be the same as how spam is made and used.
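To be concrete, the heuristic pattern-matching style is roughly this kind of thing (rules and weights made up purely for illustration, in the spirit of SpamAssassin-style scoring):

    import re

    # Toy rule-based spam scorer: each matching heuristic adds to a
    # score, and the message is rejected past a threshold. Rules and
    # weights here are invented for illustration.
    RULES = [
        (re.compile(r"lottery|wire transfer|act now", re.I), 3.0),
        (re.compile(r"click here", re.I), 1.5),
        (re.compile(r"[A-Z]{12,}"), 1.0),                   # long runs of shouting
        (re.compile(r"https?://\d+\.\d+\.\d+\.\d+"), 2.0),  # bare-IP links
    ]
    THRESHOLD = 4.0

    def is_spam(message: str) -> bool:
        score = sum(w for pattern, w in RULES if pattern.search(message))
        return score >= THRESHOLD

No model anywhere, just accumulated rules, which is a different beast from asking one LLM to judge another LLM's output.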

ezst 5 hours ago | parent [-]

It's analogous in the sense that tons of machine-submitted emails exist for the sole purpose of being mechanically triaged. The technology might be slightly different (it may not be LLMs through and through), but the pattern is the same.

chairmansteve 10 hours ago | parent | prev | next [-]

You're done dude. I'm sure it's already happening.

What are you going to do now?

Hamuko 10 hours ago | parent [-]

It's not happening because I'm not using an AI to summarize text. At the moment slop text is also fairly easy to recognise, so I can just ignore it instead.

zoho_seni 9 hours ago | parent | prev [-]

[flagged]