ranyume 14 hours ago
I'd say that, at minimum, social networks should be required to show how their algorithms work and to give users control over their data. Users must be able to know why a piece of content was served to them. Social networks are now so pervasive, shaping society to serve unknown interests, that this is the bare minimum for a free society. Ideally, users should be able to modify the algorithm so they get exactly what they want, while simultaneously maximizing free speech. If something isn't illegal, it shouldn't be hidden or removed.
drnick1 5 hours ago | parent
> Nowadays social networks are so pervasive in society, affecting it and molding it to unknown interests

I think this is the real issue. We should free ourselves from "social networks" such as TikTok, Facebook, Instagram, and others. Even with truly end-to-end-encrypted (E2EE) direct messages, they create countless other privacy problems. They enable surveillance of people at scale and should be completely shunned for that reason alone.
acuozzo 14 hours ago | parent
> social networks need to be required to show how their algorithm works

Hypothetically speaking: what if it's a neural network in which each user has his or her own unique weights, which undergo frequent retraining? Would it not be an undue burden to require releasing the weights every time they change? And what value would the weights have? We haven't yet reached the point of having interpretable neural networks. Wouldn't enforcing algorithmic interpretability also be an undue burden?

> They must be able to know why a content was served to them.

What if the authors of the code are unable to tell you why?
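To make the hypothetical concrete, here is a minimal sketch of the kind of system described above: one weight vector per user, nudged by an online gradient step on every engagement signal. All names (`UserModel`, `sgd_step`, the dimensionality) are illustrative assumptions, not any real platform's design. The point is that after enough updates, no individual weight corresponds to a human-readable "reason" a post was ranked where it was.

```python
# Hypothetical per-user ranking model: each user gets their own weights,
# retrained continuously on engagement. Purely illustrative.
import math
import random

random.seed(0)

EMBED_DIM = 8  # toy size; production systems use hundreds of dimensions


class UserModel:
    """One weight vector per user, updated on every engagement signal."""

    def __init__(self):
        self.weights = [random.gauss(0, 0.1) for _ in range(EMBED_DIM)]

    def score(self, item_embedding):
        # Relevance score: sigmoid of the dot product with the item.
        z = sum(w * x for w, x in zip(self.weights, item_embedding))
        return 1.0 / (1.0 + math.exp(-z))

    def sgd_step(self, item_embedding, clicked, lr=0.05):
        # One online gradient step. After thousands of these, the weights
        # are just accumulated arithmetic -- there is no single weight an
        # engineer could point to as "why" an item ranked highly.
        err = self.score(item_embedding) - (1.0 if clicked else 0.0)
        self.weights = [w - lr * err * x
                        for w, x in zip(self.weights, item_embedding)]


model = UserModel()
item = [random.gauss(0, 1) for _ in range(EMBED_DIM)]
before = model.score(item)
model.sgd_step(item, clicked=True)  # user engaged, so the score rises
after = model.score(item)
print(before < after)  # True: the model shifted, but the "why" is opaque
```

Releasing `model.weights` after every `sgd_step` would satisfy a disclosure rule to the letter while explaining nothing, which is the burden-versus-value tension the comment raises.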