plopilop 5 days ago
Oh, I know that strong emotions increase engagement, outrage being a prime candidate. I also have no issue believing that FB/TikTok/X etc. aggressively engage in such tactics, e.g. [0]. But I am not aware of FB publicly acknowledging that they deliberately tune the algorithm to this effect, even though they carried out some research on the effects of emotions on engagement (I would love to be proven wrong, though).

But suppose FB did publicly say they manipulate their users' emotions for engagement, and a law is passed preventing that. How do you assess that the new FB algorithm is not manipulating emotions for engagement? How do you enforce your law? If you are not allowed to create outrage, are you allowed to promote posts that expose politicians' corruption? Where is the limit?

Once again, I hate these algorithms. But we cannot regulate by saying "stop being evil"; we need specific metrics, targets, objectives. A law too broad will ban Google as much as Facebook, and a law too narrow can be circumvented in many ways.

[0] https://www.wsj.com/tech/facebook-algorithm-change-zuckerber...
mschuster91 4 days ago | parent
> But we cannot regulate by saying "stop being evil", we need specific metrics, targets, objectives.

Ban any kind of provider-defined feed that is not chronological, or that includes content from users the user does not follow, with an exception for clearly-marked-as-such advertising. Easy to write as a law, even easier to verify compliance.
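To illustrate how mechanically checkable such a rule would be, here is a minimal sketch of a compliance test. All names (`FeedItem`, `feed_is_compliant`) are hypothetical; the assumptions are that a regulator can sample a user's rendered feed as an ordered list of items, each with an author, a timestamp, and an ad-disclosure flag:

```python
from dataclasses import dataclass

@dataclass
class FeedItem:
    author: str
    timestamp: float      # seconds since epoch
    is_marked_ad: bool = False

def feed_is_compliant(feed: list[FeedItem], followed: set[str]) -> bool:
    """Check the proposed rule against a sampled feed (newest first):
    organic items must be reverse-chronological and come only from
    accounts the user follows; clearly marked ads are exempt."""
    organic = [item for item in feed if not item.is_marked_ad]
    # Rule 1: organic items appear in reverse-chronological order.
    if any(a.timestamp < b.timestamp for a, b in zip(organic, organic[1:])):
        return False
    # Rule 2: every organic item is from a followed account.
    return all(item.author in followed for item in organic)
```

A regulator (or an independent auditor with a test account) would only need to compare the rendered feed against the account's follow list; no access to the provider's ranking model is required, which is what makes this kind of rule verifiable from the outside.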