awongh 12 hours ago
It can still be both, in the sense that once a precedent is set, using these additional ideological and geopolitical motivations as momentum, maybe there will be an appetite for further algorithm regulation. As a tech person who already understood the system, I find it refreshing that I now often see the comment "I need to change my algorithm", meaning: I can shape the parameters of what X/Twitter, Instagram, YouTube, or TikTok shows me in my feed. I think there's growing meta-awareness (which I see in comments within these platforms themselves) that there is "healthy" content and that the apps manipulate their users' behavior patterns. Hopefully momentum is building for people to perceive this as a public health issue.
wahnfrieden 12 hours ago | parent
These bans, done for political purposes toward manufacturing public consent for genocide (e.g., see the leaked ADL/AIPAC "We have a big TikTok problem" audio, and members of our own Congress stating that this is what motivates the regulations), won't lead to greater freedom over algorithms. It is a move in the opposite direction: more state control over which algorithms citizens are allowed to see. The mental health angle of support for the bans is how the change gets accepted by the public, which posters here are doing free work toward generating; it is not a motivating goal or direction for these or the next regulations.