degamad 2 days ago
While this is an interesting data point, the main thing it tells us is that when the algorithm has no information about your preferences, it skews racist. This might be because, absent other information, the algorithm defaults to the "average" user's preferences. Or it might be evidence of intentional bias in the algorithm. The next piece of data we need is: if we take a new account and only interact with non-Nazi accounts and content (e.g. EFF, Cory Doctorow, Human Rights Watch, Amnesty, AOC/Obama/Clinton, etc.), does the feed become filled with non-racist content, or is racist content still pushed?
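To make the cold-start hypothesis concrete, here is a minimal sketch of the "defaults to the average user" behavior. This is hypothetical and not Twitter's actual algorithm; the class and method names are invented for illustration. The point it shows: a recommender with no per-user signal that falls back to a global engagement prior will hand every new account whatever the existing user base engages with most, and the proposed experiment tests whether personalization then overrides that prior.

```python
from collections import defaultdict

class ColdStartRecommender:
    """Toy recommender illustrating the cold-start fallback described above.
    Hypothetical sketch, not Twitter's code."""

    def __init__(self):
        # Global engagement counts across ALL users (the "average user" prior).
        self.global_counts = defaultdict(int)
        # Per-user engagement counts; empty for a brand-new account.
        self.user_counts = defaultdict(lambda: defaultdict(int))

    def record_engagement(self, user, topic):
        self.global_counts[topic] += 1
        self.user_counts[user][topic] += 1

    def rank_feed(self, user, candidate_topics):
        history = self.user_counts[user]
        if not history:
            # Cold start: no signal for this user, so rank purely by what the
            # existing user base engages with. A skewed user base yields a
            # skewed default feed.
            return sorted(candidate_topics, key=lambda t: -self.global_counts[t])
        # Otherwise, personalize on the user's own history.
        return sorted(candidate_topics, key=lambda t: -history[t])


rec = ColdStartRecommender()
# Existing users engage heavily with extreme content, lightly with the rest.
for _ in range(100):
    rec.record_engagement("existing_user", "extreme")
for _ in range(10):
    rec.record_engagement("existing_user", "civil")

# A brand-new account with zero interactions inherits the global skew...
print(rec.rank_feed("new_account", ["civil", "extreme"]))   # ['extreme', 'civil']

# ...the proposed experiment: engage only with non-extreme content and see
# whether personalization displaces the prior.
rec.record_engagement("new_account", "civil")
print(rec.rank_feed("new_account", ["civil", "extreme"]))   # ['civil', 'extreme']
```

In this toy model the prior is displaced after a single interaction; the experiment proposed above would distinguish a system that behaves like this (skew reflects the user base) from one where racist content is still pushed regardless (skew built into the algorithm).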
afavour 2 days ago
Or you can just leave the platform. We don't always need to interrogate the exact reasons why something happens; we can just see it, document it, and go elsewhere.
int_19h a day ago
> This might be because, absent other information, the algorithm defaults to the "average" user's preferences.

...of people actively using Twitter, yes. Which is precisely the point.