solid_fuel | 2 days ago
> I don't think it follows that something making money must do so by being harmful.

My point isn't that it's automatically harmful, simply that there is a very strong incentive to protect the revenue. That makes it daunting to study these harms.

> On the other hand The WSJ, Guardian, and other media outlets have published incorrect information on the same events. The primary method that people had to discover that this information was incorrect was social media.

I agree with your point here too, and I don't think the solution is to completely stop or get rid of social media. But the problem I see is that there are plenty of corners of social media where the original lies are still repeated as if they were fact. In some spaces they get challenged, but in others they are echoed uncritically. That is what concerns me: long-debunked rumors and lies that keep getting repeated because they feel good.

> If anything education is required to teach people to discuss opposing views without rising to anger or personal attacks.

I think many people are actually capable of discussing opposing views without it becoming so inflammatory... in person. But algorithmic amplification online works against that, and the strongest, loudest, quickest view tends to win in the attention landscape.

My concern is that social media is eroding people's ability to discuss things calmly, because instead of a discussion among acquaintances, everything is an argument against strangers. That creates a dynamic where the people who come to argue are not arguing against just you, but against every position they think you hold. We presort our opponents into categories based on perceived allegiance and then attack the entire image, instead of debating the actual person. I don't know whether that can be fixed behaviorally, because the challenge of social media is that the crowd is effectively infinite.

The same arguments get repeated thousands of times, and there's not even a guarantee that the person you are arguing against is a real person rather than a paid employee or a bot. That frustration builds into a froth because the debate never moves; it just repeats.
Lerc | 2 days ago | parent
> My point isn't that it's automatically harmful, simply that there is a very strong incentive to protect the revenue. That makes it daunting to study these harms.

The problem is that having an incentive to hide harms is being used as evidence of the harm, whether or not it exists. Surely the same argument could be applied the other way: companies are also incentivised to make a non-harmful product over a harmful one, since harming your users seems counterproductive at least to some extent. I don't think it is a given that the harmful approach is the most profitable.