| ▲ | nomdep 2 days ago |
| The algorithm is a mirror: it shows more of what you interact with. You see “shady content” because you pay attention to it. But you can also just follow people and read only what they write, reply to them, and write yourself. |
|
| ▲ | edent 2 days ago | parent | next [-] |
| That isn't true. I signed up for a fresh account for a project I was working on. Despite following no-one and not having interacted with anything, all I was pushed was racist, bigoted, and extremist political content. Oh, and the owner's account. |
| |
| ▲ | degamad 2 days ago | parent [-] | | While this is an interesting data point, the main thing it tells us is that when the algorithm has no information about your preferences, it skews racist. This might be because, absent other information, the algorithm defaults to the "average" user's preferences. Or it might be evidence of intentional bias in the algorithm. The next piece of data we need is: if we take a new account and only interact with non-Nazi accounts and content (e.g. EFF, Cory Doctorow, Human Rights Watch, Amnesty, AOC/Obama/Clinton, etc.), does the feed fill with non-racist content, or is the racist content still pushed? | | |
| ▲ | afavour 2 days ago | parent | next [-] | | Or you can just leave the platform. We don't always need to interrogate the exact reasons why something happens; we can just see it, document it, then go elsewhere. | |
| ▲ | int_19h a day ago | parent | prev [-] | | > This might be because, absent other information, the algorithm defaults to the "average" user's preferences. ... of people actively using Twitter, yes. Which is precisely the point. |
|
|
|
| ▲ | btown 2 days ago | parent | prev | next [-] |
| Even if you believe that Musk and team don’t “touch the scales” of the algorithm, the inevitable consequence of the decision to prioritize comments from people willing to pay for blue checks is to discourage users outside that segment from engaging at all. The resulting shift in attention data naturally reweights the algorithm's input away from “what does an average user pay attention to” and towards “what does a paying user pay attention to.” Setting morality aside, this is a self-consistent, if IMO short-sighted, business goal. What it is not is a way to create the fair and impartial “mirror” you have described. |
|
| ▲ | toyg 2 days ago | parent | prev | next [-] |
| The discussion over X is always the same: "It's gone to hell" "No, it just reflects your tastes" "That's objectively false: create a new account and see what happens." "..." |
| |
| ▲ | gertop 2 days ago | parent | next [-] | | The same can be said of Bluesky. In fact, I think you've said it yourself and recommended that people stick to manually curated follows! | | |
| ▲ | dutchCourage 2 days ago | parent | next [-] | | I think it's good advice; the main difference is that Bsky encourages you to do that by letting you customize your feeds (and set whichever one as the default). You can have a combination of personal lists and custom algorithmic feeds (your own or someone else's). Even ignoring Musk's takeover, I think it's a better model that reduces doomscrolling, ragebait, and generally low-quality interactions. | |
| ▲ | toyg 2 days ago | parent | prev [-] | | uh, where...? |
| |
| ▲ | chairmansteve 2 days ago | parent | prev [-] | | [flagged] |
|
|
| ▲ | MrOrelliOReilly 2 days ago | parent | prev | next [-] |
| I find this a bit disingenuous. If I visit a buffet looking for a healthy snack, but 90% of the dishes are fast food, then I'll probably spend a lot of time looking through the fast food, and may even eat some as the best worst option. Similarly, I have found the overall content pool to have significantly worsened since Musk's takeover. The algorithm keeps serving me trash. It doesn't mean I want trash. |
| |
| ▲ | cloverich 2 days ago | parent [-] | | You can take your analogy further. The buffet notices you pausing on unhealthy food, and begins replacing all the healthy options with unhealthy ones. People shame you for your criticisms, noting you could easily put on blinders and intentionally look longer at healthy options any time you accidentally glance at an unhealthy one. The alternative would be an absolute repression of free speech, after all. |
|
|
| ▲ | redman25 2 days ago | parent | prev | next [-] |
| A whole lot of machine learning practitioners use X, which makes it difficult to avoid if you're interested in the news. It's definitely a network-effect issue. |
| |
|
| ▲ | thrance 2 days ago | parent | prev [-] |
| Open a private tab and navigate to x.com. All you see are heinous neo-Nazis casually discussing the "Jewish question" and fantasizing about race wars. |
| |
| ▲ | nalak 2 days ago | parent | next [-] | | If you do that all you get is a login wall. Have you actually done this or is this what you imagine it to be? | | |
| ▲ | gloflo 2 days ago | parent | next [-] | | Well, I can confirm that this is the case with a brand new account. | |
| ▲ | GrinningFool 2 days ago | parent | prev [-] | | I created an account and picked "pets" as my interest. I was suggested several pet-related accounts to follow, and followed none. I went to the home page, and "For you" was populated about 80% with known right-wing accounts and angry right-flavored screeds from people I didn't recognize. The other 20% was a smattering of random, normal stuff. None of it was about pets. |
| |
|