adornKey 5 hours ago

The question is whether humans are any better.

Usually, mentioning anything about doing proper epidemiology (e.g. analysing COVID numbers), or anything modern about atmospheric physics and climate modelling, gets taken down everywhere within 24 hours - by humans.

Mathematics and physics are things a lot of people don't like and really love to take down. Idiots censoring experts is a real problem. This place has fewer idiots, but outnumbering experts with stupidity works everywhere.

pixl97 4 hours ago | parent [-]

A random sampling of humans might be better. The problem with the people who want to take things down and cause problems is that they are not random. Brigaders, marketing agencies with an agenda, nation-state propaganda teams, groups with religious motivations, idiots who have been propagandized into thinking they are fighting the good fight - all of these tip the scales away from user voting being useful on forums.

adornKey an hour ago | parent [-]

Current systems are indeed very vulnerable to professional manipulation - there is no real defence yet - and there are powerful players. But democratic sampling won't help much. Only the wrong guys are interested in voting, and once mass hysteria has set in, any democratic majority will vote to censor anything that would bring them out of their panic loop. Witch hunts lasted for hundreds of years.

I think just tagging things accordingly would be a lot better than raw censorship. In the good old days of Usenet, simply tagging things as spam worked quite well. Filtering out some tags and putting some people in a kill-file was good enough. But it required manual labour, and eventually that became too much. With AI, I think tagging could now be done efficiently.

If people like to filter out all the tags (sarcasm, math, physics, ...) they can have it - but the way things work now, a lot of important information just gets censored by stupid people everywhere. Hiding information from everybody is quite harmful - being seriously uninformed has already killed a lot of people...
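
To make the tagging-plus-kill-file idea concrete, here is a minimal TypeScript sketch of reader-side filtering. Everything in it (the Post and FilterPrefs shapes, the visiblePosts function, the example authors and tags) is made up for illustration; it just assumes each post carries an author and a list of tags applied by users or by some upstream classifier.

    // Hypothetical sketch: reader-side filtering by tags and a kill-file.
    // None of these names come from a real forum API.

    interface Post {
      author: string;
      tags: string[];   // tags applied by users or an AI classifier
      body: string;
    }

    interface FilterPrefs {
      killFile: Set<string>;    // authors this reader never wants to see
      hiddenTags: Set<string>;  // tags this reader chooses to filter out
    }

    // Keep a post only if its author is not in the kill-file and
    // none of its tags are on the reader's hidden list.
    function visiblePosts(posts: Post[], prefs: FilterPrefs): Post[] {
      return posts.filter(
        (p) =>
          !prefs.killFile.has(p.author) &&
          !p.tags.some((t) => prefs.hiddenTags.has(t))
      );
    }

    // Example: this reader hides "spam" and "sarcasm" and kill-files one author.
    const posts: Post[] = [
      { author: "alice", tags: ["math"], body: "A proof sketch..." },
      { author: "bob", tags: ["spam"], body: "Buy my course!" },
      { author: "mallory", tags: ["physics"], body: "Brigading disguised as physics." },
    ];

    const prefs: FilterPrefs = {
      killFile: new Set(["mallory"]),
      hiddenTags: new Set(["spam", "sarcasm"]),
    };

    console.log(visiblePosts(posts, prefs).map((p) => p.author)); // ["alice"]

The point is that nothing gets deleted: each reader decides which tags and which authors to hide, instead of a moderator hiding them for everyone.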

pixl97 20 minutes ago | parent [-]

One problem with tagging is how much false-tagging bullshit happens. You can't trust posters to apply correct tags, and you can expect malicious users to apply false tags to hide stories from others.

I've also always hated binary up/down voting systems. Slashdot did it better with meta-moderation, where you had a few options to choose from.

I suppose that with AI I could now mock up a UI concept I call orange slice voting. Instead of a single up/down vote, you get what looks like an orange sliced across its equator: each segment is one of a series of positive and negative vote options, and the user gets one selection per post.

"I like this content", "I believe this is true", "Fits this thread", "Good post", and "Misinformation", "I don't like this content", "doesn't fit this thread", "etc"

These can be adjusted per site as needed and give people more dimensions to search and filter by.
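
A rough TypeScript sketch of how such votes could be stored and queried; the type names, slice labels, and threshold are invented here, not taken from any existing site. It assumes each reader gets exactly one slice per post.

    // Hypothetical "orange slice" vote model: one slice per voter per post,
    // with per-slice tallies that can drive sorting and filtering.
    type Slice =
      | "like" | "true" | "fits-thread" | "good-post"
      | "misinformation" | "dislike" | "off-topic";

    interface SliceVote {
      postId: string;
      voterId: string;
      slice: Slice;  // one selection per voter per post
    }

    // Count votes per slice for a single post.
    function tally(votes: SliceVote[], postId: string): Map<Slice, number> {
      const counts = new Map<Slice, number>();
      for (const v of votes) {
        if (v.postId !== postId) continue;
        counts.set(v.slice, (counts.get(v.slice) ?? 0) + 1);
      }
      return counts;
    }

    // Keep only posts where the given slice's share of votes stays below a threshold.
    function hideBySlice(postIds: string[], votes: SliceVote[],
                         slice: Slice, threshold: number): string[] {
      return postIds.filter((id) => {
        const counts = tally(votes, id);
        const total = [...counts.values()].reduce((a, b) => a + b, 0);
        return total === 0 || (counts.get(slice) ?? 0) / total < threshold;
      });
    }

    // Example: hide posts where half or more of the votes mark them as misinformation.
    const votes: SliceVote[] = [
      { postId: "p1", voterId: "u1", slice: "true" },
      { postId: "p1", voterId: "u2", slice: "good-post" },
      { postId: "p1", voterId: "u3", slice: "misinformation" },
      { postId: "p2", voterId: "u1", slice: "misinformation" },
      { postId: "p2", voterId: "u2", slice: "misinformation" },
    ];
    console.log(hideBySlice(["p1", "p2"], votes, "misinformation", 0.5)); // ["p1"]

With per-slice tallies like these, a thread could be sorted by "I believe this is true" or filtered to drop posts where "Misinformation" dominates, rather than collapsing everything into one score.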