| ▲ | Lerc 3 days ago |
| Part of me thinks that if the case against social media was stronger, it would not be being litigated on substack. A lot of things suck right now. Social media definitely give us the ability to see that. Using your personal ideology to link correlations is not the same thing as finding causation. There will undoubtedly be some damaging aspects of social media, simply because it is large and complex. It would be highly unlikely that all those factors always aligned in the direction of good. All too often a collection of cherry-picked studies is presented in books targeting the worried public. It can build a public opinion that is at odds with the data. Some people write books just to express their ideas. Others like Jonathan Haidt seem to think that putting their efforts into convincing as many people as possible of their ideology is preferable to putting effort into demonstrating that their ideas are true. There is this growing notion that perception is reality, convince enough people and it is true. I am prepared to accept aspects of social media are bad. Clearly identify why and how and perhaps we can make progress addressing each thing. Declaring it's all bad acts as a deterrent to removing faults. I become very sceptical when many disparate threads of the same thing seem to coincidentally turn out to be bad. That suggests either there is an underlying reason that has been left unstated and unproven or the information I have been presented with is selective. |
|
| ▲ | Llamamoe 3 days ago | parent | next [-] |
| I feel like regardless of all else, the fact of algorithmic curation is going to be bad, especially when it's contaminated by corporate and/or political interests. We have evolved to parse information as if its prevalence is controlled by how much people talk about it, how acceptable opinions are to voice, how others react to them. Algorithmic social media intrinsically destroy that. They change how information spreads, but not how we parse its spread. It's parasocial at best, and very possibly far worse at worst. |
| |
| ▲ | armchairhacker 2 days ago | parent | next [-] | | No doubt the specific algorithms used by social media companies are bad. But what is "non-algorithmic" curation? Chronological order: promotes spam, which will be mostly paid actors. Manual curation by "high-quality, trusted" curators: who are they, and how will they find content? Curation by friends and locals: this is probably an improvement over what we have now, but it's still dominated by friends and locals who are more outspoken and charismatic; moreover, it's hard to maintain, because curious people will try going outside their community, especially those who are outcasts. EDIT: Also, studies have shown people focus more on negative (https://en.wikipedia.org/wiki/Negativity_bias) and sensational (https://en.wikipedia.org/wiki/Salience_(neuroscience)#Salien...) things (and thus post/upvote/view them more), so an algorithm that doesn't explicitly push negativity and sensationalism may appear to. | | |
| ▲ | rightbyte 2 days ago | parent | next [-] | | > Chronological order: promotes spam, which will be mostly paid actors. If users chose who to follow this is hardly a problem. Also classical forums dealt with spam just fine. | | |
| ▲ | squigz 2 days ago | parent | next [-] | | > Also classical forums dealt with spam just fine. Err... well, no, it was always a big problem, still is, and is made even more so by the technology of our day. | | |
| ▲ | doctor_blood 2 days ago | parent [-] | | Not really? On something like Xenforo2, there's a setting that makes a new account's posts invisible until that account is manually approved by a mod - in conjunction with the spam prevention tools - https://xenforo.com/docs/xf2/spam/#content - we really don't need to do much work. Because all new accounts need to be verified by an actual human, we can filter out 99% of spam before other users see it, and between a dozen mods for a community of 140k people we only need to spend ~15 minutes a week cleaning out spam. | | |
| ▲ | nradov 2 days ago | parent [-] | | So then you end up with power tripping mods who abuse their position to push certain narratives. In some cases we've even seen foreign governments paying mods on popular sites such as Reddit to push their propaganda. | | |
| ▲ | mid-kid 2 days ago | parent | next [-] | | You mean like how the current twitter owner tweaks the algorithm to push his narrative? This is why there was never one big forum, and there never should've been. | |
| ▲ | camgunz 2 days ago | parent | prev [-] | | This is a problem with centralization, not with mods. |
|
|
| |
▲ | armchairhacker 2 days ago | parent | prev [-] | | How will users choose who to follow? This was a real problem when I tried Mastodon/Lemmy/Bluesky, I saw lots of chronological posts but none of them were interesting. Unfortunately, classical forums may have dealt with spam better because there were fewer people online back then. Classical forums that exist today have mitigations and/or are overrun with spam. | | |
| ▲ | camgunz 2 days ago | parent [-] | | What used to happen is there would be human-powered networks ("if you like me, check out X/Y/Z"), rather than algorithm-powered networks. Sadly, the existence and dominance of algorithm-powered networks has withered humans' networking muscle. We can probably build it back though. |
|
| |
| ▲ | wkat4242 2 days ago | parent | prev | next [-] | | > Also, studies have shown people focus more on negative (https://en.wikipedia.org/wiki/Negativity_bias) and sensational (https://en.wikipedia.org/wiki/Salience_(neuroscience)#Salien...) things (and thus post/upvote/view them more), so an algorithm that doesn't explicitly push negativity and sensationalism may appear to. This is exactly why it's a problem. It doesn't even matter whether the algorithm is trained specifically on negative content. The result is the same: negative content is promoted more because it sees more engagement. The result is more discontent in society, people are constantly angry about something. Anger makes a reasonable discussion impossible which in turn causes polarisation and extremes in society and politics. What we're seeing all over the world. And the user sourced content is a problem too because it can be used for anyone to run manipulation campaigns. At least with traditional media there was an editor who would make sure fact checking was done. The social media platforms don't stand for the content they publish. | | |
| ▲ | nradov 2 days ago | parent | next [-] | | Fact checking with traditional media was always pretty spotty. Even supposedly high quality publications like the NY Times frequently reported fake news. | |
▲ | bluGill 2 days ago | parent | prev [-] | | It isn't just social media. I've been identified as a Republican and the previous owners of my house as Democrats, and since forwarding has expired I get their 'spam' mail. The names are different, but otherwise the mail from each party is exactly the same: 'donate now to stop [other party's] evil agenda'. They know outrage works and lean into it. |
| |
▲ | mikewarot 2 days ago | parent | prev | next [-] | | I've been curating my own feeds manually for decades now. I choose who to follow, and actively seek out methods of social media use that are strictly based on my selections and show things in reverse chronological order. Even Facebook can do this with the right URL if you use it via the web[1]. You start with almost nothing on a given platform but over time you build up a wide variety of sources that you can continue to monitor for quality and predictive power over time. [1] https://www.facebook.com/?sk=h_chr | |
| ▲ | pyrale 2 days ago | parent | prev [-] | | > But what is "non-algorithmic" curation? Message boards have existed for a very long time, maybe you're too young to remember, but the questions you're raising have very obvious answers. They're not without issues, but they have a strong benefit: everyone sees the same thing. |
| |
▲ | Lerc 2 days ago | parent | prev | next [-] | | I have wondered if it's not algorithmic curation per se that is the problem, but personalised algorithmic curation. When each person is receiving a personalised feed, there is a significant loss of common experience. You are not seeing what others are seeing and that creates a loss of a basis of communication. I have considered the possibility that the solution might be to enable many areas of curation but in each domain the thing people see is the same for everyone. In essence, subreddits. The problem then becomes the nature of the curators, subreddits show that human curators are also not ideal. Is there an opportunity for public algorithm curation? You subscribe to the algorithm itself and see the same thing as everyone else who subscribes sees. The curation is neutral (but will be subject to gaming, the fight against bad actors will be perpetual in all areas). I agree about the tendency for the prevalence of conversation to influence individuals, but I think it can be resisted. I don't think humans live their lives controlled by their base instincts, most learn to find a better way. It is part of why I do not like the idea of de-platforming. I found it quite instructional when Jon Stewart did an in-depth piece on trans issues. It made an extremely good argument, but it infuriated me to see a few days later so many people talking about how great it was because Jon agreed with them and he reaches so many people. They completely missed the point. The reason it was good is because it made a good case. This cynical "It's good if it reaches the conclusion we want and lots of people" is what is destroying us. Once you feel like it is not necessary to make your case, but just shout the loudest, you lose the ability to win over people who disagree because they don't like you shouting and you haven't made your case. | | |
| ▲ | Llamamoe 2 days ago | parent [-] | | > the solution might be to enable many areas of curation but in each domain the thing people see is the same for everyone. Doesn't this already happen to some extent, with content being classified into advertiser-friendly bins and people's feeds being populated primarily by top content from within the bins the algorithm deems they have an interest in? > Once you feel like it is not necessary to make your case, but just shout the loudest, you lose the ability to win over people who disagree because they don't like you shouting and you haven't made your case. To some extent, this is how human communication always worked. I think the biggest problem is that the digital version of it is sufficiently different from the natural one, and sufficiently influenceable by popular and/or powerful actors, that it enables very pathological outcomes. | | |
▲ | Lerc a day ago | parent [-] | | >Doesn't this already happen to some extent, with content being classified into advertiser-friendly bins and people's feeds being populated primarily by top content from within the bins the algorithm deems they have an interest in? The distinction I think would be with publicly disclosable algorithms, you would at least know why you were receiving a particular thing and have the option to not subscribe to that particular algorithm. Ideally such things would be properly open source. Once public, algorithms are subject to gaming. Open source provides the many eyes and feedback required to stay ahead of the bad actors. |
|
| |
| ▲ | enaaem 2 days ago | parent | prev [-] | | Social media should be liable for the content that their automatic curation put forward. If a telecom company actively gives your number to scammers to call you up, they should not hide behind the argument that it is not them scamming you, but someone else. Applying regular anti-fraud and defamation laws will probably put an end to algorithmic curation. |
|
|
| ▲ | procaryote 2 days ago | parent | prev | next [-] |
| > Part of me thinks that if the case against social media was stronger, it would not be being litigated on substack. It's litigated all over and has been for a decade. Australia for example has set an age limit of 16 to have social media. France 15. Schools or countries are trying various phone bans. There's research into it. There are whistleblowers telling about Facebook's own research they've suppressed as it would show some of their harm. Perhaps you spend too much time on social media? |
| |
▲ | Lerc 2 days ago | parent | next [-] | | I am aware that laws have been passed on a wide range of issues against expert advice. Whether it be protecting the right to assault children, punishing addicts instead of preventing harm, or cutting children off from their most used method of first contact with mental health-care. Since you bring up the Australian law as an example I shall check the expert opinion on that. For the second time in a week, I find myself in the peculiar position of seeing our research misinterpreted and used to support specific (and ill-advised) policy - this time by the Australian government to justify a blanket social media ban for those under 16. https://www.linkedin.com/posts/akprzybylski_the-communicatio... This open-letter, signed by over 140 Australian academics, international experts, and civil society organisations, addresses the proposal to ‘ban’ children from social media until the age of 16. They argue that a ‘ban’ is too blunt an instrument to address risks effectively and that any restrictions must be designed with care. https://apo.org.au/node/328608 https://ccyp.wa.gov.au/news/anzccga-joint-statement-on-the-s... https://humanrights.gov.au/about/news/proposed-social-media-... | |
| ▲ | zarzavat 2 days ago | parent | prev | next [-] | | > set an age limit of 16 to have social media This just shows how futile it is. How do you actually stop someone from using social media? If a 15 year old signs up for Mastodon what is Australia going to do about it? | | |
| ▲ | procaryote 2 days ago | parent [-] | | I'm guessing it's mostly useful as a guide for parents, but I haven't seen any hard data It shows it's not just a debate on substack though | | |
▲ | danhau 2 days ago | parent [-] | | Indeed. I think most phones already have some kind of parental control. I know Apple devices do. With screen time you can limit your kids' social media use. Shouldn't be rocket science to ban those apps automatically, if that isn't already possible. OS vendors could use that to implement the country specific bans outright. This does require though, that parents set up their kids' phones correctly. |
|
| |
| ▲ | pembrook 2 days ago | parent | prev [-] | | You’re strengthening OP’s point instead of undermining it. The “some governments banned it for kids” argument is an appeal to authority, a logical fallacy. The actions of tech-reactionist leftist governments absolutely do not constitute sound science or evidence in this matter. And if you’re claiming the French government only makes government policy based on sound data, I will point you to their currently unraveling government over the mathematically impossible social pension scheme they’ve created. | | |
| ▲ | procaryote 2 days ago | parent | next [-] | | Responding to the point "it's [only] litigated on substack", things like government bans are relevant counter-examples The bans might be unfounded or well founded, you might agree with them or not, but clearly the idea that social media might be bad has spread beyond substack | | |
| ▲ | Lerc 2 days ago | parent [-] | | At no point did I have your inserted [only] in my mind when I wrote that. I certainly do think the idea that social media might be bad has spread far and wide. What I would like to see is experts in the field reaching a consensus on to what extent that idea is true, and what should be done about it. https://www.nature.com/articles/d41586-024-00902-2#ref-CR6 It should be noted a lot of ideas have spread in recent years. We would do well to not believe all of them, no matter how comforting it is to externalize blame. | | |
| ▲ | procaryote 16 hours ago | parent [-] | | My apologies for misreading you then. I didn't expect you to object specifically to it being debated on substack in addition to by scientists, regulators etc. How is that a signal the case against social media is weak? |
|
| |
| ▲ | throw4847285 2 days ago | parent | prev [-] | | Your argument contains the fallacy fallacy, a logical fallacy in which one wrongly cites an informal fallacy in order to discredit a valid argument. The actions of several democratic governments is evidence that there is enough popular support for these actions to argue for a broader trend. And before you try for a gotcha, I am well aware that a democratic government can enact regulations without a direct vote proving that a majority of people support such an action. But inasmuch as a government reflects the will of the governed, etc etc etc. | | |
| ▲ | pembrook 2 days ago | parent [-] | | Huh? Claiming something is true because a government supports it, is quite possibly the most cut-and-dry definition of an appeal to authority I've ever seen. | | |
| ▲ | dcow 2 days ago | parent | next [-] | | Governments aren’t banning or restricting it because “god said it was bad”. Nor is the GGGP arguing that we should take it seriously because governments do so. Those would be specific appeals to authority. The GGGP argument uses examples of cases where social media has been taken seriously enough to result in government regulation to directly rebut the GGGGP’s claim that social media is only being discussed on substack and not more broadly. | |
| ▲ | Chris2048 2 days ago | parent | prev [-] | | "several democratic governments" |
|
|
|
|
|
| ▲ | majormajor 3 days ago | parent | prev | next [-] |
| It's increasingly discussed in traditional media too so let's toss out that first line glib dismissal. More and more people declaring it's net-negative is the first step towards changing anything. Academic "let's evaluate each individual point about it on its own merits" is not how this sort of thing finds political momentum. (Or we could argue that "social media" in the Facebook-era sense is just one part of a larger entity, "the internet," that we're singling out.) |
| |
▲ | delusional 3 days ago | parent | next [-] | | > More and more people declaring it's net-negative is the first step towards changing anything. I accept that "net-negative" is a cultural shorthand, but I really wish we could go beyond it. I don't think people are suddenly looking at both sides of the equation and evaluating rationally that their social media interactions are net negative. I think what's happening is a change in the novelty of social media. That is, the net value is changing. Originally, social media was fun and novel, but once that novelty wears away it's flat and lifeless. It's sort of abstractly interesting to discuss tech with likeminded people on HN, but once we get past the novelty, I don't know any of you. Behind the screen-names is a sea of un-identifiable faces that I have to assume are like-minded to have any interesting discussions with, but which are most certainly not like me at all. It's endless discussions with people who don't care. I think that's what you're seeing. A society caught up in the novelty, losing that naive enjoyment. Not a realization of net effects. | |
| ▲ | logicchains 3 days ago | parent | prev | next [-] | | >It's increasingly discussed in traditional media too so let's toss out that first line glib dismissal. Traditional media is the absolute worst possible source for anything related to social media because of the extreme conflict of interest. Decentralised media is a fundamental threat to the business model of centralised media, so of course most of the coverage of social media in traditional media will be negative. | | |
| ▲ | Theodores 2 days ago | parent | next [-] | | I wish to quibble with you on this as there is a love/hate relationship between the conventional media and social media. The mainstream media have several sources, including the press releases that get sent to them, the newswires they get their main news from and social media. In the UK the press, in particular, the BBC, were early adopters of Twitter. Most of the population would not have heard of it had it not been for the journalists at the BBC. The journalists thought it was the best thing since the invention of the printing press. Latterly Instagram has become an equally useful source to them and, since Twitter became X, there is less copying and pasting tweets. The current U.S. President seems capable of dictatorship via social media, so following his messages on social media is what the press do. I doubt any journalist has been on whitehouse.gov for a long time, the regular web and regular sources have been demoted. | |
| ▲ | alisonatwork 2 days ago | parent | prev | next [-] | | Unfortunately most of what people understand as "social media" is not decentralized, and most of the biggest names on Substack in particular come directly out of "traditional media", which is exactly why it's not a real alternative. Substack is just another newspaper except now readers have to pay for every section they want to read. | | |
| ▲ | bluebarbet 2 days ago | parent [-] | | The difference between traditional and social media is not just technical. Traditional media hosts a profession (journalism) with a code of ethics, editorial oversight, minimal standards, a mission of truth-seeking. It's easy to be cynical but those things have generally served us well. The Substack jungle is not a good replacement. | | |
| ▲ | ivewonyoung 2 days ago | parent | next [-] | | > Traditional media hosts a profession (journalism) with a code of ethics, editorial oversight, minimal standards, a mission of truth-seeking Which traditional media outlets follow those things nowadays? Genuine question, looking for information and news to consume. | | |
▲ | jpalawaga 2 days ago | parent [-] | | Almost all of the major ones? Voices on the internet have led people to believe that the journalists at major publications are biased, and that somehow also means they're lying and unethical. What's interesting is that those opinions are taken at face value without ever happening to do any practical evaluation of traditional media outlets. The reality is, if you ever read any alt-news publication it becomes evident extremely quickly how deprived of any standards those publications actually are. |
| |
| ▲ | paganel 2 days ago | parent | prev [-] | | Yes, they get paid to spill out stuff that materially benefits those that do the paying, there’s another name for that that I won’t use on a Sunday. |
|
| |
| |
| ▲ | krapp 3 days ago | parent | prev | next [-] | | "net-negative" sounds like a rigidly defined mathematically derived result but it's basically just a vibe that means "I hate social media more than I like it." | | |
▲ | sedawkgrep 2 days ago | parent [-] | | I'm struggling to understand your point, especially since the conclusion you posit is rather glib and dismissive. Net-negative is not quantifiable. But it is definitely qualifiable. I don't think you have to think of things in terms of "hate it more than I like it" when you have actual examples on social media of children posting self-harm and suicide, hooliganism and outright crimes posted for viewership, blatant misinformation proliferation, and the unbelievably broad and deep effect powerful entities can have on public information/opinion through SM. I think we can agree all of these are bad, and a net-negative, without needing any mathematic rigor. | |
| ▲ | krapp 2 days ago | parent [-] | | My point is that "More and more people declaring social media net-negative" doesn't mean anything, and it certainly isn't a valid "first step towards changing anything" because it isn't actionable. >I don't think you have to think of things in terms of "hate it more than I like it" when you have actual examples on social media of children posting self-harm and suicide, hooliganism and outright crimes posted for viewership, blatant misinformation proliferation, and the unbelievable broad and deep affect powerful entities can have on public information/opinion through SM. Sure, and then there's plenty of children not posting self-harm and suicide, hooliganism and outright crimes posted for viewership, and plenty of information and perfectly normal, non-harmful communication and interaction. "net-negative" implies there is far more harmful content than non-harmful, and that most people using social media are using it in a negative way, which seems more like a bias than anything proven. I can agree that there are harmful and negative aspects of social media without agreeing that the majority of social media content and usage is harmful and negative. | | |
| ▲ | sedawkgrep 2 days ago | parent [-] | | While I appreciate the idea that moving without factual data is often detrimental (which is what I believe you're implying here), I don't share the opinion that SM deserves any benefit of the doubt. I'm old enough to have lived as an adult pre-SM, and from my perspective the overwhelming impact of social media has been more inflammatory, degrading, divisive, etc., etc., etc., than whatever positives you think you're getting. A family friend's teenage daughter isn't allowed a cell phone, and thus has zero presence or view into SM spaces. Unlike nearly all her friends, she doesn't suffer from depression, anxiety, or any other common malady that is so prevalent today with the youth. Yes it's anecdotal, but it's also stark. We got along just fine before SM, and we'd be just fine again without it. | | |
| ▲ | krapp 2 days ago | parent [-] | | That's just your perspective, based on the fact that controversy makes headlines and normality doesn't. One might conclude based on headlines and populist political rhetoric that the US is a crime-filled hellhole, awash in gang violence and illegal aliens swarming over the border raping and pillaging and burning entire cities to the ground, whereas in reality crime is lower than it has been for years. Perceptions created by the media aren't always accurate, and "social media is a cancer" is absolutely a media-driven narrative. Remember when TikTok was a CCP mind-control weapon turning our children into sleeper agents? When Twitter was threat to the very existence of Western democracy that controlled human speech and could topple governments at will? The vast Marxist conspiracy behind all social media that rigged elections for the DNC? The louder such narratives become, the more we should question the motives of whomever holds the bullhorn. A lot of people using social media aren't teenagers. A lot of teenagers are depressed and anxious for reasons other than using social media. A lot of teenagers use social media and aren't depressed and anxious because of it. A lot of teenagers find community and support for their issues through social media. Your extrapolation from a sample size of "one teenage girl and her friends that I'm aware of" to the billions of people currently using social media, and your conclusion that social media is responsible for all of the maladies common to youth doesn't really mean much. | | |
| ▲ | sedawkgrep 2 days ago | parent [-] | | Your first paragraph is just as applicable to social media as it is to traditional media...possibly moreso. So claiming that the media lies or deceives and shouldn't be believed does not lend credence to anything you're saying. When you say "media-driven narrative", where do you think that's coming from? I probably see 10x social media to traditional media and it's all over the place. So it's not the old guard barking at the new. The reality is social media today lacks most of the rigor and accuracy that traditional media needed to be trustworthy. There's virtually no vested interest in anyone on social media being honest and forthright about anything. Your second paragraph is simply your perspective (and full of broad statements), and like you say, your opinion on that matter doesn't mean any more to me than apparently mine to you. Yet here we are, with more depression, anxiety, and civil unrest nationally than we've had since probably Vietnam. At least all that unrest is what I see predominantly on SM. |
|
|
|
|
| |
▲ | paganel 2 days ago | parent | prev | next [-] | | What's being discussed in the traditional media has no value anymore because it's a dead medium, inhabited by dinosaurs. | |
▲ | Lerc 2 days ago | parent | prev [-] | | I did not consider it a glib dismissal, and I would not consider traditional media an appropriate avenue to litigate this either. Trial by media is a term used to describe something that we generally think shouldn't occur. The appropriate place to find out what is and isn't true is research. Do research, write papers, discuss results, resolve contradictions in findings, reach consensus. The media should not be deciding what is true, they should be reporting what they see. Importantly they should make clear that the existence of a thing is not the same thing as the prevalence of a thing. >Academic "let's evaluate each individual point about it on its own merits" is not how this sort of thing finds political momentum. I think much of my post was in effect saying that a good deal of the problem is the belief that building political momentum is more important than accuracy. | | |
| ▲ | alwa 2 days ago | parent | next [-] | | Weren’t you, in your initial post, suspicious that the research process was settling on a pessimistic consensus view? Figuring that, because most every formal study is coming up negative (or “no effect supported”), it must be that the research is selective and designed to manipulate? And that a phenomenon can’t exhibit a diversity of uniformly bad effects without “an underlying reason that has been left unstated and unproven”? I don’t know how I’d state or prove a single underlying reason why most vices are attractive-while-corrosive and still, on the whole, bad. It feels like priests and philosophers have tried for the whole human era to articulate a unified theory of exactly why, for example, “vanity is bad”. But I’m still comfortable saying gambling feels good and breaks material security, lust feels good and breaks contentment (and sometimes relationships), and social media feels good and breaks spirits. I certainly agree that “social media” feels uncomfortably imprecise as a category—shorthand for individualized feeds, incentives toward vain behavior, gambling-like reinforcement, ephemerality over structure, decontextualization, disindividuation, and so on; as well as potentially nice things like “seeing mom’s vacation pics.” If we were to accept that social media in its modern form, like other vices, “feels good in the short term and selectively stokes one’s ego,” would that be enough of a positive side to accept the possibility for uniformly negative long-run effects? For that matter, and this is very possible—is there a substantial body of research drawing positive conclusions that I’m not familiar with? | |
| ▲ | non_aligned 2 days ago | parent | prev | next [-] | | > The appropriate place to find out what is and isn't true is research. Do research, write papers, discuss results, resolve contradictions in findings, reach consensus. Few hot-button social issues are resolved via research, and I'm not sure they should be. On many divisive issues in social sciences, having a PhD doesn't shield you from working back from what you think the outcome ought to be, so political preferences become a pretty reliable predictor of published results. The consensus you get that way can be pretty shoddy too. More importantly, a lot of it involves complex moral judgments that can't really be reduced to formulas. For example, let's say that on average, social media doesn't make teen suicides significantly more frequent. But are OK with any number of teens killing themselves because of Instagram? Many people might categorically reject this for reasons that can't be dissected in utilitarian terms. That's just humanity. | |
| ▲ | autoexec 2 days ago | parent | prev | next [-] | | > The media should not be deciding what is true, they should be reporting what they see. Largely I don't think the media has been dictating anything. They've just been reporting on the growing body of evidence showing that social media is harmful. What you'd call "trial by media" is just spreading awareness and discussion of the evidence we have so far, which seems like a very good thing. Social media moves faster than scientific consensus, and there's a long history of industry doing everything it can to slow that process down and muddy the waters. We've seen Facebook doing exactly that already by burying child safety research. A decade or more of "Do the thing, say nothing" is not a sound strategy when the alternative is letting the public know about the existing research we have showing real harms and letting them decide for themselves what steps to take on an individual level and what concerns to bring to their representatives, who could decide policy to mitigate those harms or even dedicate funding to further study them. | |
| ▲ | TheOtherHobbes 2 days ago | parent | prev [-] | | There's plenty of research. Plenty. None of it is positive. Summaries with links here. https://socialmediavictims.org/effects-of-social-media/ It's really not hard to confirm this. The problem isn't that "building political momentum is more important than accuracy", it's that social media is a huge global industry that pumps out psychological, emotional, and political pollution. And like all major polluters, it has a very strong interest in denying what it's doing. | | |
| ▲ | genrilz 2 days ago | parent [-] | | There is plenty of research on social media outcomes. I've looked through some of it before by just searching semanticscholar.org, and the general consensus is that it has both positive and negative effects. I don't want to have to do a literature review again, and sharing papers is hard because they are often paywalled unless you are associated with a university or are willing to pirate them. Luckily, The American Psychological Association [0] has shared this nice health advisory [1] which goes into detail. The APA has stewarded psychology research and communicated it to the public in the US for a long time. They have a good track record. [0]: https://en.wikipedia.org/wiki/American_Psychological_Associa... [1]: https://www.apa.org/topics/social-media-internet/health-advi... |
|
|
|
|
| ▲ | ushiroda80 2 days ago | parent | prev | next [-] |
I don't think the reasoning needs to be that complex. Addictive things are harmful, and social media is designed to be addictive (and increasingly so). There is a correlation between higher addictiveness and harm. Children in particular are vulnerable to addictive things. So given the above, the expectation for social media, which is highly addictive, is that it would be highly harmful, unless there are clear reasons that it's not. |
|
| ▲ | Aunche 2 days ago | parent | prev | next [-] |
| > I am prepared to accept aspects of social media are bad. Clearly identify why and how and perhaps we can make progress addressing each thing. Companies intentionally design social media to be as addictive as possible, which should be enough to declare them bad. Should we also identify each chemical in a vape and address each one individually before banning them for children? I think such a ban for social media would probably be overkill, but it should not be controversial to ban phone use in school. |
|
| ▲ | tempodox 2 days ago | parent | prev | next [-] |
| > I am prepared to accept aspects of social media are bad. Clearly identify why and how That has been done over and over again, but as long as lawmakers and regulators remain passive, nothing will improve. |
|
| ▲ | solid_fuel 3 days ago | parent | prev | next [-] |
There's a lot of money in social media, literally hundreds of billions of dollars. I expect the case against it will continue to grow, like the case against cigarettes did. I will say this, and this is anecdotal, but other events this week have been an excellent case study in how fast misinformation (charitably) and lies (uncharitably) spread across social media, and how much social media does to amp up people's anger and tone. When I open Twitter, or Facebook, or Instagram, or any of the smaller networks, I see people baying for blood. Quite literally. But when I talk to my friends, or look at how people are acting in the street, I don't see that. I don't see the absolute frenzy that I see online. If social media turns up the anger that much, I don't think it's worth the cost. |
| |
| ▲ | Lerc 2 days ago | parent | next [-] | | >There's a lot of money in social media, literally hundreds of billions of dollars. I expect the case against it will continue to grow, like the case against cigarettes did. I don't think it follows that something making money must do so by being harmful. I do think strong regulation should exist to prevent businesses from introducing harmful behaviours to maximise profits, but to justify that opinion I have to believe that it is possible to be profitable and ethical simultaneously. >events this week have been an excellent case study in how fast misinformation (charitably) and lies (uncharitably) spread across social media On the other hand, the WSJ, the Guardian, and other media outlets have published incorrect information on the same events. The primary means people had to discover that this information was incorrect was social media. It's true that there was incorrect information and misinformation on social media, but it was also immediately challenged. That does create a source of conflict, but I don't think the solution is to accept falsehoods unchallenged. If anything, education is required to teach people to discuss opposing views without rising to anger or personal attacks. | | |
| ▲ | solid_fuel 2 days ago | parent [-] | | > I don't think it follows that something making money must do so by being harmful. My point isn't that it's automatically harmful, simply that there is a very strong incentive to protect the revenue. That makes it daunting to study these harms. > On the other hand The WSJ, Guardian, and other media outlets have published incorrect information on the same events. The primary method that people had to discover that this information was incorrect was social media. I agree with your point here too, and I don't think the solution is to completely stop or get rid of social media. But the problem I see is that there are tons of corners of social media where you can still see the original lies being repeated as if they are fact. In some spaces they get challenged, but in others they are echoed and repeated uncritically. That is what concerns me - long debunked rumors and lies that get repeated because they feel good. > If anything education is required to teach people to discuss opposing views without rising to anger or personal attacks. I think many people are actually capable of discussing opposing views without it becoming so inflammatory... in person. But algorithmic amplification online works against that, and the strongest, loudest, quickest view tends to win in the attention landscape. My concern is that social media is lowering people's ability to discuss things calmly, because instead of a discussion amongst acquaintances, everything is an argument against strangers. And that creates a dynamic where people who come to argue are not arguing against just you, but against every position they think you hold. We presort our opponents into categories based on perceived allegiance and then attack the entire image, instead of debating the actual person. But I don't know if that can be fixed behaviorally, because the challenge of social media is that the crowd is effectively infinite. 
The same arguments get repeated thousands of times, and there's not even a guarantee that the person you are arguing against is a real person and not just a paid employee, or a bot. That frustration builds into a froth because the debate never moves, it just repeats. | | |
| ▲ | Lerc 2 days ago | parent [-] | | >My point isn't that it's automatically harmful, simply that there is a very strong incentive to protect the revenue. That makes it daunting to study these harms. The problem is that having an incentive to hide harms is being used as evidence for the harm, whether it exists or not. Surely the same argument could be applied that companies would be incentivised to make a product that was non-harmful over one that was harmful. Harming your users seems counterproductive at least to some extent. I don't think it is a given that a harmful approach is the most profitable. | | |
| ▲ | solid_fuel 2 days ago | parent [-] | | > The problem is that having an incentive to hide harms is being used as evidence for the harm, whether it exists or not. No, the incentive to hide harm is being given as a reason that studies into harm would be suppressed, not as evidence of harm in and of itself. This is a direct response to your original remark that "Part of me thinks that if the case against social media was stronger, it would not be being litigated on substack." Potential mechanisms and dynamics that cause harm are in the rest of my comment. > Harming your users seems counterproductive at least to some extent. Short term gains always take precedence. Cigarette companies knew about the harm of cigarettes and hid it for literally decades. [0] Fossil fuel companies have known about the danger of climate change for 100 years and hid it. [1] If you dig through history there are hundreds of examples of companies knowingly harming their users, and continuing to do so until they were forced to stop or went out of business. Look at the Sacklers and the opioid epidemic [2], hell, look at Radithor. [3] It is profitable to harm your users, as long as you get their money before they die. [0] https://academic.oup.com/ntr/article-abstract/14/1/79/104820...
[1] https://news.harvard.edu/gazette/story/2021/09/oil-companies...
[2] https://en.wikipedia.org/wiki/Sackler_family
[3] https://en.wikipedia.org/wiki/Radithor | | |
| ▲ | Lerc 2 days ago | parent [-] | | >No, the incentive to hide harm is being given as a reason that studies into harm would be suppressed, not as evidence of harm in and of itself. This is a direct response to your original remark that "Part of me thinks that if the case against social media was stronger, it would not be being litigated on substack." That seems like a fair argument. I don't think it means that it grants opinions the weight of truth. I think it would make it fair to identify and criticise suppression of research, and to advocate for a mechanism by which such research can be conducted. An approach that I would support in this area would be a tax or levy on companies with large numbers of users, ear-marked for funding independent research on the welfare of their user base and on society as a whole. >Short term gains always take precedence. That seems a far worthier problem to address. >If you dig through history there are hundreds of examples of companies knowingly harming their users I don't deny that these things exist, I simply believe that they are not inevitable. | |
| ▲ | Eisenstein 2 days ago | parent [-] | | > That seems a far worthier problem to address. If we can't fix the underlying problem immediately, treating the symptoms seems reasonable in the meantime. |
|
|
|
|
| |
| ▲ | Tade0 2 days ago | parent | prev [-] | | > If social media turns up the anger that much, I don't think it's worth the cost. It doesn't. It's just that when people can publish whatever with impunity, they do just that. Faced with the reality of what they're calling for they would largely stop immediately. I believe the term for that is "keyboard warrior". | | |
| ▲ | NeutralCrane 2 days ago | parent [-] | | What you are describing is the mechanism by which social media turns up the anger. |
|
|
|
| ▲ | kmacdough 2 days ago | parent | prev | next [-] |
I'm with you on the skepticism, but I also think the underlying point is worth acknowledging: social media represents a step change in how we consume news about current events. No longer are there central sources relied on by huge swaths of the population: institutions which could be held accountable as a whole and which stood to lose from poor reporting. Previous behemoths like the NYT, WaPo, and Bloomberg are now comparatively niche and fighting for attention. This feels so obvious it's not necessary to litigate, but if someone has statistics to the contrary, I'll be happy to look deeper and re-evaluate. I agree that one should not immediately succumb to fear of the new. At the same time, science is slow by design. It takes years to construct, execute, and report on proper controlled studies, and decades to iterate and solidify a holistic analysis. In the meantime, it seems naive to run forward headlong, assuming the safest outcome. We'll have raised a generation or two before we can possibly reach analytical confidence. Serious irreparable damage could be done far before we have a chance to prove it. |
|
| ▲ | make_it_sure 2 days ago | parent | prev | next [-] |
Seems that you're the guy who likes to go against the norm, even when you're wrong. Social media being controlled by corporations, with algorithms built to create addiction, should be enough, unless you have other motives to ignore all this. |
|
| ▲ | 2 days ago | parent | prev | next [-] |
| [deleted] |
|
| ▲ | nathan_compton 2 days ago | parent | prev | next [-] |
| All this is good except that to achieve any kind of actual political action in this actual universe in which we live, we must use rhetoric. Asking people to be purely rational is asking them to fail to change anything about the way our culture works. |
|
| ▲ | nobodywillobsrv 2 days ago | parent | prev | next [-] |
The problem is that this kind of long-form "think" misses the basics and even uses polarising denialist phrases like "fear mongering". There is an obvious incoherence, even misreasoning, present in the people most ruined by the new media. For example, you might want to drive the risk of something to zero. To do that, you need to calmly respond to every bad event of that type with policy, adding more restrictions at some cost. This should be uncontentious to describe, yet again and again the pattern is to mistake the desires, the costs, and the interventions. I can't even mention examples of this without risking massive karma attacks. That is the state of things. I used to think misreasoning was just something agit-prop accounts did online, but years ago I started hearing the broken calculus being spoken by IRL humans. We need a path forward to make people understand that they should almost all disagree, but they MUST agree on how they disagree, else they don't actually disagree. They are just barking animals waiting for a chance to attack. |
|
| ▲ | boppo1 2 days ago | parent | prev | next [-] |
The Nepalese just elected a government on Discord. Who says we can’t litigate on substack? Hell, it might be the future. |
|
| ▲ | beowulfey 2 days ago | parent | prev | next [-] |
There are a lot of biochemical hypotheses for why social media is unhealthy that I personally buy into. |
|
| ▲ | keybored 2 days ago | parent | prev | next [-] |
| > Part of me thinks that if the case against social media was stronger, it would not be being litigated on substack. For all we know there are millions who have withdrawn and are making the case outside of social media. Or living the case. This reply seems a bit fish-in-water to me. |
|
| ▲ | techpineapple 2 days ago | parent | prev | next [-] |
I think the problem with social media is that it's easy to exploit: all the most powerful people in the world perceive themselves to benefit from it. This isn't true for something like smoking. |
|
| ▲ | logicchains 3 days ago | parent | prev [-] |
| There's a concerted assault on social media from the powers that be because social media is essentially decentralised media, much harder for authoritarians to shape and control than centralised media. Social media is why the masses have finally risen up in opposition to what Israel's been doing in Gaza, even though the genocide has been going on for over half a century: decentralised information transmission allowed people to see the reality of what's really going on there. |
| |
| ▲ | beeflet 2 days ago | parent [-] | | It's not decentralized at all. It represents a total commercialization of the town square. The situation you reference with regard to Israel/Gaza is only possible because TikTok is partially controlled by Chinese interests. But it also goes to show that TikTok could easily have been banned or censored by western governments: just kick them off the app stores and block the servers. For example, there is no Net Neutrality protection in the USA that would defend them if the government wanted to quietly throttle their network speed. Social media as it exists now is not decentralized in any meaningful capacity. |
|