Aurornis 5 hours ago

> Social media and the world were better places before algorithmic feeds took over everything

Sometimes I feel like I'm the only one who remembers how toxic places like Usenet, IRC, and internet forums were before Facebook. Either that, or people only remember the past of the internet through rose-colored glasses.

Complain about algorithmic feeds all you want, but internet toxicity was rampant long before modern social media platforms came along. Some of the crazy conspiracy theories and hate-filled vitriol that filled Usenet groups back in the day make the modern Facebook news feed seem tame by comparison.

linguae 5 hours ago | parent | next [-]

I agree that there’s always been toxicity on the Internet, but I also feel it’s harder to avoid toxicity today since the cost of giving up algorithmic social media is greater than the cost of giving up Usenet, chat rooms, and forums.

In particular, I feel it's much harder to disengage from Facebook than from other forms of social media. Most of my friends and acquaintances are on Facebook. I have thought about leaving Facebook due to the toxic recommendations from its feed, but it would be much harder for me to keep up with life events from my friends and acquaintances, and it would also be harder for me to share my own life events.

With that said, the degradation of Facebook’s feed has encouraged me to think of a long-term solution: replacing Facebook with newsletters sent occasionally with life updates. I could use Flickr for sharing photos. If my friends like my newsletters, I could try to convince them to set up similar newsletters, especially if I made software that made setting up such newsletters easy.

No ads, no algorithmic feeds, just HTML-based email.

eddythompson80 4 hours ago | parent | prev | next [-]

You’re absolutely right. Shocking, rage-bait, sensational content was always there in social media long before algorithmic feeds. As a matter of fact, “algorithmic feeds” were in a way always there; it’s just that back in the day those “algorithms” were very simple (most watched/read/replied-to today, this week, this month; longest, shortest, newest, oldest, etc.)
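The old-style “algorithms” described above really were just transparent sorts the reader picked themselves. A minimal sketch (the post records and field names here are illustrative assumptions, not any real forum's schema):

```python
# Hypothetical forum posts; "replies" and "age_hours" are made-up fields
# standing in for whatever metadata a 2000s-era forum tracked.
posts = [
    {"title": "Cat photos", "replies": 4, "age_hours": 2},
    {"title": "Rage bait thread", "replies": 90, "age_hours": 30},
    {"title": "Weekly meetup", "replies": 12, "age_hours": 1},
]

# The reader chooses the sort key explicitly -- there is no hidden
# engagement model deciding what surfaces.
by_newest = sorted(posts, key=lambda p: p["age_hours"])
by_most_replied = sorted(posts, key=lambda p: p["replies"], reverse=True)

print(by_newest[0]["title"])        # the newest post
print(by_most_replied[0]["title"])  # the most-replied post
```

The point is that each ordering is a one-line, user-visible rule: switching from "most replied" to "newest" instantly hides the rage-bait thread, which is the opt-out the comment below describes.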

I think the main thing algorithmic feeds did was present the toxicity as the norm, as opposed to it being a choice you make. I used to be part of a forum back in the early 2000s. Every few weeks the most-replied thread would be some rage-bait or sensational thread. Those threads would keep getting pushed to the top of the forum and stay there for a while, growing very quickly as a ton of people kept replying. But you could easily see that everyone else was carrying on with their day. You ignore it and move on; you sort by newest or filter it out and you’re good. It was clear that this was one particularly heated thread, and you could avoid it. Mods would also often move it to a controversial subforum (or lock it altogether if they were heavy-handed), so you sort of had to go out of your way to get there, and then you would know that you were actively walking into a “controversial section” or “conspiracy” forum. It wasn’t viewed as normal. You were a crazy person if you kept linking to and talking about that crazy place.

With algorithmic feeds, it’s the norm. You’re not seeking out shady corners of the internet or subscribing to a crazy Usenet newsgroup to feed your own interest in rage or follow a conspiracy. You’re just going to the Facebook, Twitter, Reddit, or YouTube homepage: literally the homepages of the biggest, most mainstream companies in the US. Just like everyone else.

intended an hour ago | parent | prev | next [-]

Having moderated both PHP forums and SM sites, I can say quantity is a quality all its own.

Not to mention we have adversaries to contend with now. I still remember seeing Palantir slides for sock puppet management tools way back in the day. That was the SOTA at one point. Today?

SM pushed humanity past a critical connected mass that Usenet and IRC never could.

csnover 4 hours ago | parent | prev [-]

You aren’t the only one who remembers. But in that time it was a self-selecting process. The problem with “the algorithm”, as I see it, is not that it increases the baseline toxicity of your average internet fuckwad (though I do think the algorithm, by seeking to increase engagement, also normalises antisocial behaviour more than a regular internet forum by rewarding it with more exposure, and in a gamified way that causes others to model that antisocial behaviour). Instead, it seems to me that it does two uniquely harmful things.

First, it automatically funnels people into information silos which are increasingly deep and narrow. On the old internet, one could silo themselves only to a limited extent; it would still be necessary to regularly interact with more mainstream people and ideas. Now, the algorithm “helpfully” filters out anything it decides a person would not be interested in—like information which might challenge their world view in any meaningful way. In the past, it was necessary to engage with at least some outside influences, which helped to mediate people’s most extreme beliefs. Today, the algorithm successfully proxies those interactions through alternative sources which do the work of repackaging them in a way that is guaranteed to reinforce, rather than challenge, a person’s unrealistic world view.

Many of these information silos are also built at least in part from disinformation, and many people caught in them would have never been exposed to that disinformation in the absence of the algorithm promoting it to them. In the days of Usenet, a person would have to get a recommendation from another human participant, or they would have to actively seek something out, to be exposed to it. Those natural guardrails are gone. Now, an algorithm programmed to maximise engagement is in charge of deciding what people see every day, and it’s different for every person.

Second, the algorithm pushes content without appropriate shared cultural context into the faces of many people who will then misunderstand it. We each exist in separate social contexts with in-jokes, shorthands for communication, etc., but the algorithm doesn’t care about any of that; it only cares about engagement. So you end up with today’s “internet winner” who made some dumb joke that only their friend group would really understand, and it blows up because to an outsider it looks awful. The algorithm amplifies this to the feeds of more people who don’t have the appropriate context, using the engagement metric to prioritise it over other, more salient content. Now half the world is expressing outrage over a misunderstanding, one which would probably never have happened if not for the algorithm boosting the message.

Because there is no Planet B, it is impossible to say whether things would be where they are today if everything were the same except without the algorithmic feed. (And, of course, nothing happens in a vacuum; if our society were already working well for most people, there would not be so much toxicity for the algorithm to find and exploit.) Perhaps the current state of the world was an inevitability once every unhinged person could find 10,000 of their closest friends who also believe that pi is exactly 3, and the algorithm only accelerated this process. But the available body of research leads me to conclude, like the OP, that the algorithm is uniquely bad. I would go so far as to suggest it may be a Great Filter level threat due to the way it enables widespread reality-splitting in a geographically dispersed way. (And if not the recommendation algorithm on its own, certainly the one that is combined with an LLM.)