ktosobcy 5 days ago
Erm, FB itself admitted it ran research on users' emotional responses to the content it shows. The FB/X modus operandi is to keep as many people glued to the screen for as long as possible. The most triggering content will awaken all those "keyboard warriors" to fight, so instead of seeing your friends and the people you follow, you mostly see whatever will affect you one way or another (hence the proliferation of ever more extreme stuff). Google is going downhill too, but for different reasons: they also care only about the investors' bottom line, but being the biggest ad provider they don't care all that much whether people spend time on google.com or not.
plopilop 5 days ago | parent
Oh, I know that strong emotions increase engagement, outrage being a prime candidate. I also have no issue believing that FB/TikTok/X etc. aggressively engage in such tactics, e.g. [0]. But I am not aware of FB publicly acknowledging that they deliberately tune the algorithm to this effect, even though they did carry out research on the effect of emotions on engagement (I would love to be proven wrong, though).

But suppose FB did publicly say they manipulate their users' emotions for engagement, and a law is passed preventing that. How do you assess that the new FB algorithm is not manipulating emotions for engagement? How do you enforce your law? If you are not allowed to create outrage, are you allowed to promote posts that expose politicians' corruption? Where is the limit?

Once again, I hate these algorithms. But we cannot regulate by saying "stop being evil"; we need specific metrics, targets, and objectives. A law too broad will ban Google as much as Facebook, and a law too narrow can be circumvented in many ways.

[0] https://www.wsj.com/tech/facebook-algorithm-change-zuckerber...
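To make the enforcement problem concrete, here is a minimal sketch (in Python; all names and weights are hypothetical, loosely inspired by the reaction weighting described in [0]) of what an engagement-weighted feed ranker can look like. The point is that "manipulating emotions" may amount to nothing more than a handful of coefficients in a scoring function:

    # Hypothetical sketch of an engagement-weighted feed ranker.
    # Names and weights are invented for illustration; the WSJ piece [0]
    # reported emoji reactions (including "angry") being weighted
    # several times higher than plain likes.
    from dataclasses import dataclass

    @dataclass
    class Post:
        likes: int
        angry: int
        comments: int
        shares: int

    # The "manipulation" is just coefficients. Is angry=5.0 illegal
    # but angry=1.5 fine? Where would a regulator draw the line?
    WEIGHTS = {"likes": 1.0, "angry": 5.0, "comments": 15.0, "shares": 30.0}

    def score(post: Post) -> float:
        return (WEIGHTS["likes"] * post.likes
                + WEIGHTS["angry"] * post.angry
                + WEIGHTS["comments"] * post.comments
                + WEIGHTS["shares"] * post.shares)

    def rank(feed: list[Post]) -> list[Post]:
        # Highest-scoring posts surface first in the feed.
        return sorted(feed, key=score, reverse=True)

Note that nothing in this sketch mentions emotions at all: any outrage amplification is an emergent property of which signals get large weights, which is exactly why a "specific metrics" approach is so hard to write down.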