▲ | wahnfrieden 12 hours ago |
These bans, done for political purposes toward manufacturing public consent for genocide (see the ADL/AIPAC "We have a big TikTok problem" leaked audio, and members of our own Congress stating that this is what motivates the regulations), won't lead to greater freedom over algorithms. They point in the opposite direction: more state control over which algorithms citizens are allowed to see.

The mental-health angle of support for the bans is a way the change gets accepted by the public (which posters here are doing free work toward generating), not a motivating goal or direction for these or the next regulations.
▲ | JumpCrisscross 12 hours ago | parent | next [-]
> bans done for political purposes

You want a political body to make decisions apolitically?

> mental health angle of support

This was de minimis. The support was, start to finish, from national-security angles. There was some cherry-on-top AIPAC and protectionist talk. But the votes were won because TikTok kept lying about serious stuff [1] while Russia reminded the world of the cost of appeasement.

[1] https://www.blackburn.senate.gov/services/files/76E769A8-3ED...
▲ | awongh 12 hours ago | parent | prev [-]
Yeah, it might be naive to think the government will act in the interest of the consumer (although it has happened before), but at least it may keep the conversation going among users themselves. This situation is another data point and a net good for society, whether or not the ban sticks. Discussion around, for example, the technical implementation of content moderation being inherently political (i.e., Meta and Twitter) will be good for everyone.