Nearly a third of social media research has undisclosed ties to industry(science.org)
259 points by bikenaga 10 hours ago | 106 comments
everdrive 6 hours ago | parent | next [-]

Social media itself is a grand experiment. What happens if you start connecting people from disparate communities, and then prioritize for outrage and emotionalism? In years prior, you would be heavily shaped by the people you lived near. TV and internet broke this down somewhat, but social media really blew the doors off. Now it's the case that almost no one seems to be able to explain all the woes we're facing today: extreme ideas, populism, the destruction of institutions. All of this because people are addicted to novelty and outrage, and because companies need their stock price to go up.

slg 5 hours ago | parent | next [-]

>and then prioritize for outrage and emotionalism

This isn’t inherent to social networks though. It is a choice by the biggest social media companies to make society worse in order to increase profits. Just give us a chronological feed of the people/topics we proactively choose to follow and much of this harm would go away. Social media and the world were better places before algorithmic feeds took over everything.
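
For concreteness, a "chronological feed of the people we proactively follow" is a trivially simple thing to build. A minimal sketch in Python (the Post record and all field names here are invented for illustration):

  from dataclasses import dataclass

  @dataclass
  class Post:
      author: str
      text: str
      timestamp: float  # unix seconds

  def chronological_feed(posts: list[Post], followed: set[str]) -> list[Post]:
      # Only accounts the user proactively chose to follow, newest first.
      # No engagement signal enters the ranking anywhere.
      return sorted(
          (p for p in posts if p.author in followed),
          key=lambda p: p.timestamp,
          reverse=True,
      )

Everything the big platforms layer on top of this is a choice.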

frogpelt an hour ago | parent | next [-]

Before there was social media there was click bait headlines from supposedly reputable news agencies.

Social media gave people easy ways to engage and share. And it turns out what people engage with and share is click bait/rage bait.

So maybe not technically inherent but a natural consequence of creating networks for viral sharing of content.

ambicapter an hour ago | parent | prev | next [-]

The existence of "yellow journalism" in the 19th century would disagree with that statement. That outrage and emotionalism grab human attention more than other feelings do is a biological fact that has been exploited for centuries and centuries, the same way gambling has been around for the entirety of recorded human history. It's a default behavior pattern installed in every human; some can override it, most don't.

dathinab 5 hours ago | parent | prev | next [-]

> This isn’t inherent to social networks though. It is a choice by the biggest social media companies to make society worse in order to increase profits.

going beyond social media, it's IMHO the side effect of an initially innocent-looking but dangerous and toxic monetization model, which we find today not just in social media but even more so in news, apps, and most digital markets

betty_staples 12 minutes ago | parent | next [-]

I will say I am strongly against what social media algorithms do.

But I am fascinated by the black mirror element. They said "hey algorithm, what gets likes, what gets views - look into human nature and report back" - "human nature shows polarization does! emotionally charged, divisive content!" - that's just fascinating. Not learning, philosophy, growth, education, health, no, the naughty stuff, the bad stuff. That's not to exonerate social media companies, no, that would be the same as exonerating big tobacco for what they do.

trevwilson 5 hours ago | parent | prev | next [-]

Yeah unfortunately this seems to be a common, if not inevitable, result of any product where "attention" or "engagement" are directly correlated with profitability.

slg 5 hours ago | parent | prev [-]

And if we want to go beyond that, we really just have to blame capitalism. What happens when you build a society around the adversarial collection of money? You get a society that by and large prioritizes making money above all else including ethics and morals.

makingstuffs 2 hours ago | parent | next [-]

That and the fact that money and media presence are essentially what win elections. The only way we can really have democracy is with a truly informed populace, and the only way people can make a truly informed vote without all the noise is to have anonymous voting. By which I mean you do not know which politician/party you are voting for, you just know the policies they have promised to enforce.

Further to that, there needs to be accountability. Right now, in the UK at least, governments are not held to account, at all. They get into office with grand promises of flying elephants and golden egg laying geese but obviously never follow through with said promises. The populace, ultimately, just shrugs it off with ‘politicians lie’ and continue complaining about it within their social circles.

Our political systems are fundamentally broken. We shouldn’t care if policies are from party A or party B. All that should matter is the content of the policy and whether it is ever actually materialised.

Right now we have a situation where people are manipulated left, right and centre into believing a given party’s absolute BS manifesto, which is written in the full knowledge that not delivering will have very little impact on its authors, as they’ve just spent a substantial amount of time getting paid lucrative salaries to essentially argue with a bunch of other liars in a shouting match on the telly.

Remove the football-esque fandom that applies to political parties by removing any ability to publicly affiliate a given person with said party, and I’d bet we see different results across the board. Remove all this absolute nonsense of politicians promoting their ideologies on TV/Twitter etc. and you will remove a lot of the brainwashing that happens. Remove the most corrupt situation of all, private firms and individuals being able to fund political parties, and you level the playing field.

Obviously this is a hard pill for many to swallow as no one likes to be told they’ve essentially been brainwashed into their thoughts and ego is everything in modern society.

Zigurd 4 hours ago | parent | prev | next [-]

It's the combination of software that is infinitely malleable, and capitalism. Successful entrepreneurs in software want liquidity. So no matter how benevolent they start out being, they eventually lose control and the software gets turned into an exploitative adversary to satisfy investor owners.

This is fine if you can refuse the deal. Lots of software and the companies selling it have died that way. But if you've made a product addictive or necessary for everyday survival, you have the customer by the short hairs.

The technology underlying Bluesky is deliberately designed so that it's hard to keep a customer captive. It will be interesting to see if that helps.

dathinab 4 hours ago | parent | prev [-]

yes but it's more complicated

Like, if you look at the original reasoning for why capitalism is a good match for democracy, you find arguments like voting with money etc., _alongside things which must not be tolerated in capitalism_ or it will break. And that includes stuff like:

- monopolies (or, more generically, anything having too much market power and abusing it; it doesn't need to be an actual monopoly)

- unfair market practices which break fair competition

- situations which prevent actual user choice

- too much separation between the wealth of the poorest and richest in a country

- giving too many ways for money to influence politics

- using money to bar people from a fair trial / from enforcing their rights

- also, I personally would add opacity, but I think that only really started to become a systemic issue with globalization and the digital age.

This also implies that for markets with natural monopolies, strict regulation and consumer protection are essential.

Now the points above are to some degree a checklist of what has defined US economics, especially in the post-Amazon age (I say post-Amazon age because the founding story of Amazon was a milestone and is basically the idea of "let's systematically destroy any fair competition and use externally sourced money (i.e. subsidization) to forcefully create a quasi-monopoly", and after that succeeded it became somewhat of the go-to approach for a lot of "speculative investment" funding).

Anyway, to come back to the original point:

What we have in the US has little to do with the idea of capitalism which led to its adoption in the West.

It's more like someone took it and is twisting it into the most disturbing dystopian form possible; they just aren't fully done yet.

slg 3 hours ago | parent [-]

> - giving too many ways for money to influence politics

I think what we're learning is that mass (social) media means this simply isn't preventable in a world with free speech. Even if the US had stricter campaign finance laws in line with other western democracies, there would still need to be some mechanism so that one rich guy (or even a collection of colluding rich guys) couldn't buy a huge megaphone like Twitter or CBS.

As long as there is no upper limit on wealth accumulation, there is no upper limit on political influence in a capitalistic democracy with free speech. Every other flaw you list is effectively downstream of that because the government is already susceptible to being compromised by wealth.

numpad0 2 hours ago | parent | prev | next [-]

  >> Social media itself is a grand experiment. What happens if you start connecting people from disparate communities, and then prioritize for outrage and emotionalism?  
  > It is a choice by the biggest social media companies to make society worse in order to increase profits.
I think there's a more pointed way to frame this ongoing phenomenon: the US invested in social media assuming it would be the mainstay of its cultural dominance into the 21st century, and it wasn't. It turned out to be more of a giant oil pipeline with a check valve, leaving the US completely exposed to East Asian influence, and now it's scrambling at damage control.

The US as it is has no cultural industrial base for producing social media content. East Asian content, if not East Asian indigenous social media itself, easily wins the Internet by leveraging universally strong public education, without even being intentional about it. That's what happened, and that must be the intent behind the shift to rage political shows, which the US/EU can at least produce, even if it isn't useful.

ChrisMarshallNY 2 hours ago | parent [-]

I’m not sure that’s especially new. In the 1990s, we had Japanophiles, in the US. Right now, Korea is taking the crown. We’ll have to see who’s next. Might be China, but their culture is so controlled, that it might not happen. Russia was starting to have a lot of cultural influence, until Iron Curtain 2.0 slammed down.

Viral culture requires a certain amount of freedom of expression, along with access to media.

skybrian 3 hours ago | parent | prev | next [-]

It sure seems inherent to me. You get outrage and emotionalism even in small Internet forums. Moderation is necessary to damp it down.

dylan604 4 hours ago | parent | prev | next [-]

Big media has been doing this longer than the socials. The socials just took the knob and turned it to 11.

Aurornis 4 hours ago | parent | prev [-]

> Social media and the world were better places before algorithmic feeds took over everything

Sometimes I feel like I'm the only one who remembers how toxic places like Usenet, IRC, and internet forums were before Facebook. Either that, or people only remember the internet's past through rose-colored glasses.

Complain about algorithmic feeds all you want, but internet toxicity was rampant long before modern social media platforms came along. Some of the crazy conspiracy theories and hate-filled vitriol that filled Usenet groups back in the day make the modern Facebook news feed seem tame by comparison.

linguae 3 hours ago | parent | next [-]

I agree that there’s always been toxicity on the Internet, but I also feel it’s harder to avoid toxicity today since the cost of giving up algorithmic social media is greater than the cost of giving up Usenet, chat rooms, and forums.

In particular, I feel it’s much harder to disengage from Facebook than from other forms of social media. Most of my friends and acquaintances are on Facebook. I have thought about leaving Facebook due to the toxic recommendations from its feed, but it would be much harder for me to keep up with life events from my friends and acquaintances, and also harder to share my own life events.

With that said, the degradation of Facebook’s feed has encouraged me to think of a long-term solution: replacing Facebook with newsletters sent occasionally with life updates. I could use Flickr for sharing photos. If my friends like my newsletters, I could try to convince them to set up similar newsletters, especially if I made software that made setting up such newsletters easy.

No ads, no algorithmic feeds, just HTML-based email.
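
The plumbing for this already exists in Python's standard library, for what it's worth. A rough sketch (the SMTP host, addresses, and credentials below are placeholders, not real services):

  import smtplib
  from email.message import EmailMessage

  msg = EmailMessage()
  msg["Subject"] = "Life updates"
  msg["From"] = "me@example.com"      # placeholder address
  msg["To"] = "friends@example.com"   # placeholder recipient list
  msg.set_content("Plain-text version for clients that block HTML.")
  msg.add_alternative("<h1>Hi all!</h1><p>Photos are on Flickr.</p>", subtype="html")

  # Placeholder host and credentials; any mail provider works here.
  with smtplib.SMTP("smtp.example.com", 587) as server:
      server.starttls()
      server.login("me@example.com", "app-password")
      server.send_message(msg)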

eddythompson80 3 hours ago | parent | prev | next [-]

You’re absolutely right. Shocking, rage-bait, sensational content was always there in social media, long before algorithmic feeds. As a matter of fact, “algorithmic feeds” were in a way always there; it’s just that back in the day those “algorithms” were very simple (most watched/read/replied today, this week, this month; longest, shortest, newest, oldest, etc.).
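
To illustrate just how simple: here is a sketch of that whole era of "algorithms" in Python (the thread fields are invented for illustration). The ranking is identical for every visitor and trivially swapped for "newest":

  import time

  WEEK = 7 * 24 * 3600

  def most_replied(threads: list[dict], window: float = WEEK) -> list[dict]:
      # "Hot" = reply count inside a fixed time window; no personalization.
      now = time.time()
      def recent_replies(t):
          return sum(1 for ts in t["reply_times"] if now - ts <= window)
      return sorted(threads, key=recent_replies, reverse=True)

  def newest(threads: list[dict]) -> list[dict]:
      # The transparent alternative every user could pick instead.
      return sorted(threads, key=lambda t: t["created"], reverse=True)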

I think the main thing algorithmic feeds did was present the toxicity as the norm, as opposed to it being a choice you make. Like, I used to be part of a forum back in the early 2000s. Every few weeks the most-replied thread would be some rage-bait or sensational thread. Those threads would keep getting pushed to the top and remain at the top of the forum for a while, growing very quickly as a ton of people kept replying. But you could easily see that everyone else was carrying on with their day. You ignore it and move on. You sort by newest or filter it out and you’re good. It was clear that this was a particularly heated thread and you could avoid it. Also, mods would often move it to a controversial subforum (or lock it altogether if they were heavy-handed), so you sort of had to go out of your way to get there, and then you would know that you were actively walking into a “controversial section” or “conspiracy” forum etc. It wasn’t viewed as normal. You were a crazy person if you kept linking to and talking about that crazy place.

With algorithmic feeds, it’s the norm. You’re not seeking out shady corners of the internet or subscribing to a crazy Usenet newsgroup to feed your own interest in rage or follow a conspiracy. You are just going to the Facebook or Twitter or Reddit or YouTube homepage, the homepages of literally the biggest, most mainstream companies in the US. Just like everyone else.

csnover 3 hours ago | parent | prev [-]

You aren’t the only one who remembers. But in that time it was a self-selecting process. The problem with “the algorithm”, as I see it, is not that it increases the baseline toxicity of your average internet fuckwad (though I do think the algorithm, by seeking to increase engagement, also normalises antisocial behaviour more than a regular internet forum by rewarding it with more exposure, and in a gamified way that causes others to model that antisocial behaviour). Instead, it seems to me that it does two uniquely harmful things.

First, it automatically funnels people into information silos which are increasingly deep and narrow. On the old internet, one could silo themselves only to a limited extent; it would still be necessary to regularly interact with more mainstream people and ideas. Now, the algorithm “helpfully” filters out anything it decides a person would not be interested in—like information which might challenge their world view in any meaningful way. In the past, it was necessary to engage with at least some outside influences, which helped to mediate people’s most extreme beliefs. Today, the algorithm successfully proxies those interactions through alternative sources which do the work of repackaging them in a way that is guaranteed to reinforce, rather than challenge, a person’s unrealistic world view.

Many of these information silos are also built at least in part from disinformation, and many people caught in them would have never been exposed to that disinformation in the absence of the algorithm promoting it to them. In the days of Usenet, a person would have to get a recommendation from another human participant, or they would have to actively seek something out, to be exposed to it. Those natural guardrails are gone. Now, an algorithm programmed to maximise engagement is in charge of deciding what people see every day, and it’s different for every person.

Second, the algorithm pushes content without appropriate shared cultural context into the faces of many people who will then misunderstand it. We each exist in separate social contexts with in-jokes, shorthands for communication, etc., but the algorithm doesn’t care about any of that; it only cares about engagement. So you end up with today’s “internet winner” who made some dumb joke that only their friend group would really understand, and it blows up because to an outsider it looks awful. The algorithm amplifies this to the feeds of more people who don’t have an appropriate context, using the engagement metric to prioritise it over other more salient content. Now half the world is expressing outrage over a misunderstanding, one which would probably never have happened if not for the algorithm boosting the message.
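
To caricature that feedback loop in code (illustrative Python only; real ranking systems are vastly more complex, and these weights are invented):

  def engagement_ranked(posts: list[dict], k: int = 10) -> list[dict]:
      # Invented weights; the point is only that outraged reactions count
      # as "engagement" exactly like any other reaction.
      def score(p):
          return p["likes"] + 3 * p["comments"] + 5 * p["shares"]

      winners = sorted(posts, key=score, reverse=True)[:k]
      for p in winners:
          # Exposure begets engagement begets exposure: the context-free
          # audience's reactions feed straight back into tomorrow's ranking.
          p["impressions"] += 1
      return winners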

Because there is no Planet B, it is impossible to say whether things would be where they are today if everything were the same except without the algorithmic feed. (And, of course, nothing happens in a vacuum; if our society were already working well for most people, there would not be so much toxicity for the algorithm to find and exploit.) Perhaps the current state of the world was an inevitability once every unhinged person could find 10,000 of their closest friends who also believe that pi is exactly 3, and the algorithm only accelerated this process. But the available body of research leads me to conclude, like the OP, that the algorithm is uniquely bad. I would go so far as to suggest it may be a Great Filter level threat due to the way it enables widespread reality-splitting in a geographically dispersed way. (And if not the recommendation algorithm on its own, certainly the one that is combined with an LLM.)

rapatel0 2 hours ago | parent | prev | next [-]

>and then prioritize for outrage and emotionalism

The concept is called "yellow journalism" and extends back basically to the days of Joseph Pulitzer and William Randolph Hearst. Modern culture has poured gasoline on this, but it's existed forever.

I think the issue is that we have scaled groupthink: people now engage in circular conversations that reinforce nonsensical beliefs, whereas historically they might have encountered 1 or 2 people who agreed with crazy or inaccurate notions, and most of their environment would likely push back on outrageous ideas.

Now, you can find thousands of people who not only agree, but reinforce those biases with other facts and perceptions.

rr808 6 hours ago | parent | prev | next [-]

It's interesting that TV is regulated. You can't put certain content on there, and I'm sure governments can ultimately control things. Today's eyeballs are controlled by Meta and TikTok, and I don't really trust them at all; they have too much unchecked power.

Aurornis 5 hours ago | parent | next [-]

> I'm sure the governments can ultimately control things

In the US there is free speech protecting the ability of people to say what they want.

Public TV has limitations on broadcast of certain material like pornography, obviously, but the government can’t come in and “control” the opinions of journalists and newscasters.

The current US admin has tried to put pressure on broadcasters it disagrees with and it’s definitely not a good thing.

You really do not want to encourage governments to “control” what topics cannot be discussed or what speech is regulated. Sooner or later the government will use that against someone you agree with for their own power.

dathinab 5 hours ago | parent | prev [-]

> interesting that TV is regulated

sort of; it's mostly that freely accessible channels need their content to be within a certain ~PG/age-rating range (and in many countries that also changes depending on the time of day; not sure about the US)

beyond that the constitution disallows any further regulation of actual content

though that doesn't mean they can't apply subtle pressure indirectly.

Is that legal? no.

Anyway done for years? yes.

But it was mostly subtle, not forced; i.e., you give "suggestions", not required changes.

Except in recent years it has become a lot less subtle and much more forced: not just giving non-binding "suggestions" but also harassing media outlets in other, seemingly unrelated ways if they don't follow your "suggestions".

PS: Like seriously, it often looks like the US doesn't really understand what free speech is about (some of the more important points being freedom of journalism, of teaching, and of expressing your opinions through demonstrations and the like), why many historians find it good but suboptimal, and why e.g. the approach to free speech was revisited when drafting the West German constitution instead of just more or less copying the US constitution (the US, but also France and the UK, had some say in the drafting; it was originally meant to be temporary until reunification, but in the end it was mostly kept verbatim during unification as it had worked out quite well).

decipherer 6 hours ago | parent | prev | next [-]

We have exited the age of information, and entered the age of irritation.

HPsquared 6 hours ago | parent [-]

"Rage bait" is 2025 Oxford Word of the Year. We are reaching saturation levels now, I think, where people are becoming aware of it.

https://corp.oup.com/news/the-oxford-word-of-the-year-2025-i...

johnnyanmac 5 hours ago | parent | next [-]

Exposure is a good first step. But what action is taken with that awareness? We seem to be in that post-truth era where being told what's happening before your eyes is at best met with apathy and at worst rejected.

01HNNWZ0MV43FF 6 hours ago | parent | prev [-]

I'm glad people are aware of it. 7 years ago I heard about The Scissor on SSC and wasn't sure if people would ever believe it: https://slatestarcodex.com/2018/10/30/sort-by-controversial/

nurettin 2 hours ago | parent | prev | next [-]

> people are addicted to novelty and outrage, and because companies need their stock price to go up

Sounds like news broadcasts. Throw in some politics, murders, rapes and economic downturns and you've got your audience hooked watching through the ads.

cal_dent 5 hours ago | parent | prev | next [-]

Throw inherent mimetic desire into the mix and where we are as a society makes sense. There's a need for more that frankly can't be satisfied, and it's hard to see how we turn back from that without a structural rejig.

gjsman-1000 6 hours ago | parent | prev [-]

> the destruction of institutions

More like the exposure of institutions. It’s not like they were more noble previously; their failings were just less widely understood. How much of America knew about Tuskegee before the internet? Or the time National Geographic told us all about the Archaeoraptor, ignoring prior warnings?

The above view is also wildly myopic. Did you think modern society had overcome populism, extreme ideas, and social revolution, all of which have been very popular historically? Human nature does not change.

Another thing that doesn’t change? There are always, as evidenced by your own comment, people saying the system wasn’t responsible, that it’s external forces harming the system. The system is immaculate, the proletariat are stupid. The monarchy didn’t cause the revolution, ignorant ideologues did. In any other context, that’s called black and white thinking.

cluckindan 5 hours ago | parent [-]

Maybe social media is just another level of unethical human experimentation by corporations.

https://en.wikipedia.org/wiki/Unethical_human_experimentatio...

Grimblewald 7 hours ago | parent | prev | next [-]

From history we know that research left unchecked and unrestricted can lead to some really dark and horrible things. Right now I think it's a problem that social media companies can do research without answering to the same regulatory bodies that regular academics/researchers would. For example, they don't have to answer to independent ethics committees/reviews. They're free to experiment as they like on the entire population.

I never understood why this doesn't alarm more people on a deep level.

Heck you wouldn't get ethics approval for animal studies on half of what we know social media companies do, and for good reason. Why do we allow this?

terminalshort 7 hours ago | parent | next [-]

What counts as research? If I make UI changes, I guess it's ok to roll it out to everyone, because that's not an experiment, but if I roll it out to 1%, then that's research? If I own two stores and decide to redecorate one and see if sales increase vs the other store, do I need government approval?

Also I would like an example of something a social media company does that you wouldn't be able to get approval to do on animals. That claim sounds ridiculous.

CodingJeebus 7 hours ago | parent | next [-]

> Also I would like an example of something a social media company does that you wouldn't be able to get approval to do on animals.

One possible example is the emotion manipulation study Facebook did over a decade ago[0]. I don't know how you would perform an experiment like this on animals, but Facebook has demonstrated a desire to understand all the different ways its platform can be used to alter user behavior and emotions.

0: https://www.npr.org/sections/alltechconsidered/2014/06/30/32...

terminalshort 7 hours ago | parent [-]

Isn't this just what every media company has done since the beginning of time? You think the news companies don't select their stories based on the same concept? And I'm pretty sure you would get approval to do something similar to animals given that you can get approval to actually feed them drugs and see how that affects their behavior.

anonymars 6 hours ago | parent [-]

Can you provide evidence that [non-social] media companies have performed research specifically to see if they can make people sadder, similar to what was described above?

terminalshort 6 hours ago | parent [-]

Turn on cable news for a minute and it's quite obvious that it is designed to make you angry. What difference does it make if they performed research or not?

Grimblewald an hour ago | parent | prev | next [-]

Well, the definition is simply "the systematic investigation into and study of materials and sources in order to establish facts and reach new conclusions"

However, I'd like to narrow this to "The systematic investigation into, and manipulation of, a system in order to map variables that can be exploited for material gain"

The reason for narrowing it is that it's important people can still tinker with their website and test out new aesthetics etc.

As for an example of something you'd be hard-pressed to get approval for: body image and social defeat (the Instagram study). This would be extremely unlikely to pass: given that the research goal would translate to "we wish to see how much bullying makes them stay in the cage longer", it would be rejected as gratuitous cruelty.

bearseascape 13 minutes ago | parent | prev | next [-]

> What counts as research?

You might be aware of this, but most big tech companies (i.e. the ones with massive user counts) don't just let you roll out UI changes to everyone, because they know that this has a downstream impact on users. So they often A/B test those things, which is literally an experiment: you randomize who sees what, measure outcomes, and ship whatever wins. There are many data scientists employed in industry to set up and analyze experiments like this.
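
For the unfamiliar, a minimal sketch of such an experiment in Python (hash-bucket assignment plus a two-proportion z-test; the function names are invented for illustration):

  import hashlib
  from math import sqrt

  def assign(user_id: str, experiment: str) -> str:
      # Deterministic assignment: a given user always sees the same variant.
      h = int(hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest(), 16)
      return "treatment" if h % 2 else "control"

  def z_stat(conv_c: int, n_c: int, conv_t: int, n_t: int) -> float:
      # Two-proportion z-test: did the treatment move the metric?
      p_c, p_t = conv_c / n_c, conv_t / n_t
      p = (conv_c + conv_t) / (n_c + n_t)
      se = sqrt(p * (1 - p) * (1 / n_c + 1 / n_t))
      return (p_t - p_c) / se

  # |z| > 1.96 is significant at the 5% level; ship whichever variant wins.

The "metric" being optimized is whatever the company chooses, which is exactly where the ethics question enters.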

Also, it seems clear that this is not harmless research. Everyone is aware of the effect social media has on our mental health (see the under-16 social media ban in Australia). Facebook definitely knows this; e.g., in 2014 there was a big controversy over their News Feed “emotional contagion” study, where they altered what content people saw to measure changes in sentiment, without meaningful informed consent [1][2].

> Also I would like an example of something a social media company does that you wouldn't be able to get approval to do on animals. That claim sounds ridiculous.

This misses the main point: the issue is that for these experiments (and they are experiments) there is often no independent approval mechanism in the first place. While Facebook, after receiving backlash, does now have privacy/integrity/safety teams which review these experiments, they are far from being independent third parties.

[1] https://www.theatlantic.com/technology/archive/2014/06/every... [2] https://www.sciencenews.org/blog/scicurious/main-result-face...

Aurornis 4 hours ago | parent | prev | next [-]

> What counts as research? If I make UI changes, I guess it's ok to roll it out to everyone, because that's not an experiment, but if I roll it out to 1%, then that's research?

I think this is a good example of how disconnected and abstract the conversations about social media have become. There's a common theme in these HN threads where everything social media companies do is talked about like some evil foreign concept, but if any of us were to do basic A/B testing on a website then that's understandable.

Likewise, the dissonance of calling for heavy regulations on social media sites or restrictions on freedom of speech is ironic given that Hacker News fits the definition of a social media site with an algorithmic feed. There's a deep otherness ascribed to what's called social media and what gets a pass.

It gets really weird in the threads demanding ID verification for social media websites. I occasionally jump into those threads and ask those people if they'd be willing to submit to ID verification to use Hacker News and it turns into mental gymnastics to claim that Hacker News (and any other social platforms they use like Discord or IRC) would be exempt under their ideal laws. Only platforms other people use would be impacted by all of these restrictions and regulations.

advael 6 hours ago | parent | prev | next [-]

I think by now roughly half of us grew up in a world where global reach has been simply taken for granted. I don't think it's particularly onerous to say that there should be some oversight on what a business can and can't do in the context where that business is relying on public infrastructure and can affect the whole-ass world, personally

terminalshort 5 hours ago | parent [-]

There is oversight. Just not oversight of their UI design and algorithm, which is what people are calling for here. Regulation of the feed algorithm would be a massive 1A violation.

Not sure what public infrastructure has to do with it. Access to public infrastructure doesn't confer the right to regulate anything beyond how the public infrastructure is used. And in the case of Meta, the internet infrastructure they rely on is overwhelmingly private anyway.

mikem170 5 hours ago | parent [-]

If algorithm output is protected by the 1st amendment then perhaps Section 230 [0] protections should no longer apply, and they should be liable for what they and their algorithms choose to show people.

[0] https://en.wikipedia.org/wiki/Section_230

advael 3 hours ago | parent | next [-]

That seems a hell of a lot better than repealing section 230 altogether. I also agree with the rest of this argument. Either the editorial choices made by an algorithm are a neutral platform or they're protected speech. They certainly aren't just whatever's convenient for a tech company in any given moment

terminalshort 3 hours ago | parent | prev [-]

What has that got to do with anything? Not at all related to the topic of research or algorithmic control. All that does is make companies potentially liable if somebody slanders somebody else on their site.

fergal 5 hours ago | parent | prev | next [-]

Nice example there to trivialize and confuse the issue, but yeah, if your hypothetical store redecorating has a large-scale public health impact then you should need approval.

shimman 7 hours ago | parent | prev [-]

Are you being serious right now, or just "asking questions" to suppress others' thoughts? Why are these types of comments so common on this site? No, obviously we aren't talking about basic code changes; but if changes are consistently being made that clearly leave users more depressed or alienated, that should be questioned more and finally regulated.

Fun fact: the last data privacy law the US passed was about video stores not sharing your rentals. Maybe it's time we start passing more; after all, it's not like these companies HAVE to conduct business this way.

It's all completely arbitrary; there's no reason why social media companies can't be legally compelled to divest from all user PII and forced to go to regulated third-party companies for such information. Or forced to allow export of data, or to follow consistent standards, so competitors can easily enter the field and users can easily follow.

You can go for the throat and say that social media companies can't own an advertising platform either.

Before you go all "oh no, the government should help the business magnates more, not the users", I suggest you study how monopolies existed in the 19th century, because they look no different from the corporate structure of any big tech company, and see how the government finally regulated those bloodsuckers back then.

terminalshort 7 hours ago | parent | next [-]

> Are you being serious right now or just engaging in "asking questions" to suppress others thoughts?

I must be really good at asking questions if they have that kind of power. So here's another. How would we ever even know those changes were making users more depressed if the company didn't do research on them? Which they would never do if you make it a bureaucratic pain in the ass to do it.

And, no, I would much rather the companies that I explicitly create an account and interact with to be the ones holding my data rather than some shady 3rd parties.

BeetleB 5 hours ago | parent | prev | next [-]

> Are you being serious right now or just engaging in "asking questions" to suppress others thoughts?

I don't know why people are being overly reactive to the comment.

Research means different things to different people. For me, research means "published in academic journals". He is merely trying to get everyone on the same page before a conversation ensues.

cortesoft 6 hours ago | parent | prev [-]

I don’t think it is fair to criticize the person you are responding to for asking the question they did.

These types of comments are common on this site because we are actually interested in how things work in practice. We don’t like to stop at just saying “companies shouldn’t be allowed to do problematic research without approval”, we like to think about how you could ever make that idea a reality.

If we are serious about stopping problematic corporate research, we have to ask these questions. To regulate something, you have to be able to define it. What sort of research are we trying to regulate? The person you replied to gave a few examples of things that are clearly ‘research’ and probably aren’t things we would want to prevent, so if we are serious about regulating this we would need a definition that includes the bad stuff but doesn’t include the stuff we don’t want to regulate.

If we don’t ask these questions, we can never move past hand wringing.

BeetleB 5 hours ago | parent | prev | next [-]

> Right now I think it's a problem that social media companies can do research without answering to the same regulatory bodies that regular academics / researchers would. For example, they don't have to answer to independant ethics committees / reviews. They're free to experiement as they like on the entire population.

If they are going to publish in academic journals, they will have to answer to those bodies. Whether those bodies have any teeth is a whole other matter.

Grimblewald 2 hours ago | parent [-]

Well, journals requiring ethics approval is not the reason research requires it; journals just ask in order to help ensure due process is occurring. We need ethics approval because, prior to having ethics boards etc., researchers did some truly awful things in the name of science, often with no benefit or gain.

noufalibrahim an hour ago | parent | prev | next [-]

Tangentially related. The idea that scientific research operates in a vacuum uninfluenced by real world considerations in its relentless search for truth is a notion that a lot of scientism advocates put out.

I've always found the idea laughable and this is a good example of that.

vovavili 6 hours ago | parent | prev [-]

>Right now I think it's a problem that social media companies can do research without answering to the same regulatory bodies that regular academics / researchers would. For example, they don't have to answer to independent ethics committees / reviews.

These bodies are exactly what makes academia so insufferable. They're just too filled with overly neurotic people who investigate research way past the point of diminishing returns because they are incentivized to do so. If I were to go down the research route, there is no way I wouldn't want to do it in the private sector.

Grimblewald 2 hours ago | parent [-]

While I agree they're a pain, you need only look at the long list of earlier experiments that were used as justification for why we need this regulation, and why it was implemented with very little resistance.

bikenaga 10 hours ago | parent | prev | next [-]

Original article: "Industry Influence in High-Profile Social Media Research" - https://arxiv.org/abs/2601.11507

Abstract: "To what extent is social media research independent from industry influence? Leveraging openly available data, we show that half of the research published in top journals has disclosable ties to industry in the form of prior funding, collaboration, or employment. However, the majority of these ties go undisclosed in the published research. These trends do not arise from broad scientific engagement with industry, but rather from a select group of scientists who maintain long-lasting relationships with industry. Undisclosed ties to industry are common not just among authors, but among reviewers and academic editors during manuscript evaluation. Further, industry-tied research garners more attention within the academy, among policymakers, on social media, and in the news. Finally, we find evidence that industry ties are associated with a topical focus away from impacts of platform-scale features. Together, these findings suggest industry influence in social media research is extensive, impactful, and often opaque. Going forward there is a need to strengthen disclosure norms and implement policies to ensure the visibility of independent research, and the integrity of industry supported research. "

cheriot 7 hours ago | parent | prev | next [-]

We need an update of Thank You for Smoking

simplisticelk 5 minutes ago | parent [-]

You should give it a rewatch keeping in mind that it was financed by Peter Thiel and Elon Musk. The message of the film comes across a bit differently in that light; much more libertarian/anti-government. Enjoyable film nonetheless!

fnoef 7 hours ago | parent | prev | next [-]

Would it be appropriate to use :surprised_pikachu_face:?

I mean, I no longer know who to trust. It feels like the only solution is to go live in a forest and disconnect from everything.

greggoB 7 hours ago | parent | next [-]

This has been my default expected reaction since Nov 2024. So I'd say so.

Also feel you wrt living in a forest and leaving this all behind.

username223 3 hours ago | parent | prev [-]

> It feels like the only solution is to go live in a forest, and disconnect from everything.

As much as I approve of living in forests, you don't need to go that far. Tech bros are fond of things being "frictionless," so add some friction. Delete the social media apps from your phone and use their websites instead. Don't bookmark the sites, but make yourself type in the URLs each time you want to visit. If each visit is intentional, instead of something you do automatically when you're bored, you'll have a better experience.

BurningFrog 7 hours ago | parent | prev | next [-]

Keep in mind that those qualified to do research in a field have typically worked in that industry.

Because that's where people with that expertise work.

duskwuff 5 hours ago | parent [-]

And, in many cases, because that's where funding exists.

This comes up somewhat frequently in discussions of pet food. Most of the companies doing research into pet food - e.g. doing feeding studies, nutritional analysis, etc - are the manufacturers of those foods. This isn't because there's some dark conspiracy of pet food companies to suppress independent research; it's simply because no one else is funding research in the field.

dzink 6 hours ago | parent | prev | next [-]

How do you do objective research without a data pipeline? Social media companies can use user privacy as an excuse to not share feeds that influence users. The first step to fixing the wrongs is transparency, but there are no incentives for big tech to enable that.

austin-cheney 7 hours ago | parent | prev | next [-]

I bet the same is true with AI and bitcoin social media posts and research.

fenwick67 7 hours ago | parent [-]

And cigarettes and fossil fuels

deepriverfish 2 hours ago | parent | prev | next [-]

I'm surprised it's this low with how shady the social media industry is.

potato3732842 5 hours ago | parent | prev | next [-]

Literally every industry is like this.

Academia is basically a reputation-laundering industry. If the cigarette people said smoking was fine, or the oil people said the same about oil, you'd never believe them. But they and their competitors fund labs at universities, and sure, those universities may publish stuff the funders don't like from time to time, but overall things are going to trend toward "not harmful to benefactors." And then what gets published gets used as the basis for decisions on how to direct your tax dollars, deploy state violence for or against certain things, etc. And of course (some of) the academics want to do research that drives humanity forward or whatever, but they're basically stuck selling their labor to (after several layers in between) the donors for decades in order to eke out a little bit of what they want.

It's not just "how the sausage is made" that's the problem. It's who you're sourcing the ingredients from, who you're paying off for the permit to run the factory, who's supplying your labor. You can't fix this with minor process adjustments.

chaps 6 hours ago | parent | prev | next [-]

Surprised it's not more, but it makes sense when you consider the sources of the data. Gotta have data sharing agreements, yeah?

princevegeta89 6 hours ago | parent | prev | next [-]

No surprise. Social media is a shithole.

h4kunamata 6 hours ago | parent | prev | next [-]

Since when is this news?

Whole industries have been paid off for decades; the hope is independent journalists with no ties to anybody but the public they want to reach.

Find one independent journalist on YT with lots of information and sources, and you will notice how we have been living in a lie.

blackqueeriroh 2 hours ago | parent | prev | next [-]

Lol, you should check out the reality of obesity science.

Jadiiee 7 hours ago | parent | prev | next [-]

My jaw stayed in place

devradardev 5 hours ago | parent | prev | next [-]

This is a clever approach to reduce token usage. In my experience with Gemini 3 for code analysis, the biggest bottleneck isn't just the logic, but the verbosity of standard languages consuming the context window. A targeted intermediate language like this could make 'thinking' models much more efficient for complex tasks.

irishcoffee 4 hours ago | parent [-]

Almost like… a simplified set of instructions you would give a computer that get distilled down into machine code that executes on bare metal!

shevy-java 8 hours ago | parent | prev | next [-]

A system built to yield to perfect lobbyism.

schmuckonwheels 6 hours ago | parent | prev | next [-]

Experts say

hsuduebc2 6 hours ago | parent | prev | next [-]

I’m half expecting headlines thirty years from now to talk about social media the way we now talk about leaded gasoline, a slow, population-wide exposure that messed with people’s minds and quietly dragged down cognition, wellbeing, and even the economy across whole generations.

hsuduebc2 6 hours ago | parent | prev | next [-]

This is a ridiculously recurring pattern.

AlexandrB 7 hours ago | parent | prev | next [-]

Same as it ever was. You see the same kind of thing in the food industry, pharmaceutical industry, tobacco industry, fossil fuel industry, etc. On the one hand it's almost inevitable: who (outside of the government) is going to care enough about the results of stuff like this to fund it, if not the industry affected? You also often need the industry's help if you're doing anything that involves large sample sizes or some kind of mass production.

On the other hand it puts a big fat question mark over any policy-affecting findings since there's an incentive not to piss off the donors/helpers.

imiric 7 hours ago | parent [-]

The people in these industries are collectively responsible for millions of preventable deaths, and they, their families, and generations of their offspring are and will be living the best lives money can buy.

And yet one person kills a CEO, and they're a terrorist.

terminalshort 7 hours ago | parent | next [-]

Large and complex systems are fundamentally unpredictable and have tradeoffs and consequences that can't be foreseen by anybody. Error rates are never zero. So basically anything large enough is going to kill people in one way or another. There are intelligent ways to deal with this, and then there is shooting the CEO, which will change nothing because the next CEO faces the exact same set of choices and incentives as the last one.

BrenBarn 7 hours ago | parent | next [-]

Well, given what you said, one obvious mechanism is to cap the sizes of these organizations so that any errors are less impactful. Break up every single company into little pieces.

terminalshort 6 hours ago | parent [-]

That doesn't really help, because the complexity isn't just internal to the companies; it also exists in the network between the entities that make up the industry. It may well even make things worse, because coordination becomes much harder. E.g., if I run into a bug caused by another team at work, it's massively easier to get that fixed than if the bug is in vendor software.

In terms of health insurance, which is the industry where the CEO got shot, we can pretty definitively say that it's worse. More centralized systems in Europe tend to perform better. If you double the number of insurance companies, then you double the number of different systems every hospital has to integrate with.

We see this on the internet too. It's massively more centralized than 20 years ago, and when Cloudflare goes down it's major news. But from a user's perspective the internet is more reliable than ever. It's just that when 1% of users face an outage once a day it gets no attention, but when 100% of users face an outage once a year everyone hears about it even though it is more reliable than the former scenario.

imiric 6 hours ago | parent | prev | next [-]

I'm not talking about unpredictable tradeoffs and consequences.

I'm talking about intentional actions that lead to deaths. E.g. [1] and [2], but there are numerous such examples. There is no plausible defense for this. It is pure evil.

[1]: https://en.wikipedia.org/wiki/Tobacco_Institute

[2]: https://en.wikipedia.org/wiki/Purdue_Pharma

terminalshort 5 hours ago | parent [-]

Well, those get handled. Purdue was sued into bankruptcy and the Tobacco Institute was shut down when the industry was forced to settle for $200 billion in damages.

imiric 5 hours ago | parent [-]

So human lives have a price tag, and companies can kill millions for decades as long as they pay for it. Gotcha.

asdff 6 hours ago | parent | prev [-]

Pretty predictable what happens when you deny coverage for a treatment someone needs

terminalshort 6 hours ago | parent | next [-]

But do they need it? How do you know? And don't say because the doctor said so, because doctors disagree all the time. When my grandfather was dying in his late 80s, the doctor said there was nothing he could do. So his children took him to another doctor, who said the same. And then another doctor, who agreed with the first two. But then they took him to a 4th doctor, who agreed to do open heart surgery, which didn't work, and if anything hastened his inevitable death due to the massive stress. The surgery cost something like 70 grand and they eventually got the insurance company to pay for it. But the insurance company should not have paid for it because it was a completely unnecessary waste of money. And of course there will be mistakes in the other direction because this just isn't an exact science.

asdff 6 hours ago | parent [-]

At that point, why cover anything at all if the doctor could always be wrong?

terminalshort 6 hours ago | parent [-]

Stupid question. If you have a better way to make decisions on insurance coverage then state it.

asdff 5 hours ago | parent [-]

Why is it on me to come up with a new model for healthcare? I can acknowledge shortcomings of the present system without having to come up with solutions for them.

latency-guy2 3 hours ago | parent [-]

> Pretty predictable what happens when you deny coverage for a treatment someone needs

The other poster demonstrated that you have no idea what "need" is. So you also have no idea what a "shortcoming of the present system" is, because how the hell would you even know?

asdff an hour ago | parent [-]

People being denied treatment they need seems like a shortcoming of the present system.

quesera 6 hours ago | parent | prev [-]

It would be a clean and compelling narrative, if Luigi or someone he loved was denied coverage for a necessary treatment!

But that doesn't seem to be true at all. He just had a whole lot of righteous anger, I guess. Gotta be careful with that stuff.

asdff 5 hours ago | parent [-]

Why does it matter if it personally occurred to him or someone related to him? It happens to plenty of people. You can have empathy for people not bound by blood.

quesera 5 hours ago | parent [-]

Of course you can. But where does it stop?

There is a great deal of injustice in the world. Psychologically healthy adults have learned to add a reflection step between anger and action.

By all evidence, Luigi is a smart guy. So one can only speculate on his psychological health, or whether he believed that there was an effective response to the problem which included murdering an abstract impersonal enemy.

I'm stumped, honestly. The simplest explanations are mental illness, or a hero complex (but I repeat myself). Maybe we'll learn someday.

asdff an hour ago | parent [-]

He could die quietly making no impact on the issue. Or he could sacrifice the rest of his free life to put a spotlight on the issue. That is what he chose to do. Not an easy decision I'm sure.

windowpains 7 hours ago | parent | prev [-]

You say “a CEO” like it’s just a fungible human unit. In reality, a CEO is much much more valuable than a median human. Think of how many shareholders are impacted, many little old grey haired grannies, dependent on their investments for food, shelter and medical expenses. When you think of the fuller context, surely you see how sociopathic it is to shrug at the killing of a CEO, let alone a CEO of a major corporation. Or maybe sociopathy is the norm these days, for the heavily online guys.

asdff 6 hours ago | parent | next [-]

The CEO literally is a fungible human unit. Any job can be learned.

terminalshort 6 hours ago | parent [-]

In that case it also accomplishes nothing to kill him because another will just take his place. So either way you lose.

asdff 6 hours ago | parent [-]

A message is certainly sent in the process that previously was going unheard.

"Former UnitedHealth CEO Andrew Witty published an op-ed in The New York Times shortly after the killing, expressing sympathy with public frustrations over the “flawed” healthcare system. The CEO of another insurer called on the industry to rebuild trust with the wider public, writing: “We are sorry, and we can and will be better.”

Mr. Thompson’s death also forced a public reckoning over prior authorization. In June, nearly 50 insurers, including UnitedHealthcare, Aetna, Cigna and Humana, signed a voluntary pledge to streamline prior authorization processes, reduce the number of procedures requiring authorization and ensure all clinical denials are reviewed by medical professionals. "

https://www.beckerspayer.com/payer/one-year-after-ceo-killin...

quesera 6 hours ago | parent | prev [-]

CEOs are not special humans. They know lots of people, but that's not an unusual trait.

When one gets fired, quits, retires, or dies, you get a new one. Pretty fungible, honestly.

But yeah, shooting people is a bad decision in almost all cases.

BrenBarn 7 hours ago | parent | prev | next [-]

But what undisclosed ties might this study itself have?

Braxton1980 4 hours ago | parent | prev [-]

This means their research should be examined in more detail, but unless there's evidence they are being dishonest in some sense, it doesn't invalidate their findings.