sampullman 5 hours ago

I think there's a little more nuance than that, but it seems roughly correct.

Wouldn't it be better if apps/websites targeting kids didn't use A/B testing to be more addictive?

KaiserPro 4 hours ago | parent | next [-]

I think addiction is a red herring.

Pokemon is addictive, computer games are addictive. It's whether they are knowingly causing harm, and/or avoiding attempts to stop that harm.

Zigurd 3 hours ago | parent [-]

Addictive patterns in games and other online activity are a bit less innocent than you are portraying them: knowingly causing harm is too low a standard. A lot of the profitability of online games, prediction markets, etc. comes from the whales. The whales are probably addicted. If your business is a whale hunt, you are possibly causing harm, at least to the extent that addiction is dangerous.

ramon156 5 hours ago | parent | prev | next [-]

They'd find another method. Why are we allowing this in the first place?

I don't have an answer to fix this whole mess, but it starts with our attitude towards addiction. We've built a system that rewards addiction in all sorts of places. Granted, every addiction is different, and I'm of the opinion that it's not a matter of "drug = bad"; it's how you use it and react to it. We can control the latter, but we choose to ignore it because we're too busy with everything else. This is a tale as old as time...

greenhearth 2 hours ago | parent | next [-]

"Free market" and "entrepreneur spirit" fetishism and fear of collective social action against individual drives.

aaomidi 4 hours ago | parent | prev | next [-]

On the timescale it takes for law to catch up with what's going on, YouTube and Facebook have been around for a tiny amount of time.

bluefirebrand 4 hours ago | parent [-]

They have been around long enough to have done unknowable damage to entire generations of humans

aaomidi 4 hours ago | parent [-]

As usual, unfortunately, laws are reactive.

ToucanLoucan 4 hours ago | parent | prev [-]

> Why are we allowing this in the first place?

Exactly what I keep coming back to.

For me, it feels like you could cut this problem down substantially by eliminating section 230 protection on any algorithmically elevated content. Everywhere. Full stop.

If you write or commission an algorithm that pushes content to users, in ANY fashion, that is endorsement. You want that content to be seen, for whatever odd reason, and if it's harmful to your users, you should be held responsible for it. It's one thing if some random asshole messages me on Telegram trying to scam me; there's little Telegram can do (though a fucking "do not permit messages from people not in my contacts" setting would be nice). But there is nothing at all that "makes" Facebook shovel AI bullshit at people, apart from the fact that it juices engagement, whether genuine or ironic/ragebait-driven.

And AI bullshit is merely the annoying end of it: I've seen "Facebook help" groups that are clearly just trawling for people's account info, scam pages, scam products, all kinds of shit. Either it pisses people off, so Facebook passes it around, or they give Facebook money and Facebook shoves it into the feeds of everyone it can.

It's fucking disgusting and there's no reason to permit it.

pjc50 2 hours ago | parent | next [-]

> If you write or have an algorithm created that pushes content to users, in ANY fashion, that is endorsement

Yes. People make free speech arguments about this, but the list and order of stuff returned by algorithmic non-directed (+) lists is clearly a form of endorsement. Even more so is advertising, which undergoes a bidding process. Pages which show ads should be liable if those ads are fraudulent, especially if they're so obviously fraudulent that casual readers suspect them immediately.

(+) Returning a list of stuff in a user-specified query, on the other hand, is not endorsement. Chronological or alphabetical order or distance-based or even random is fine.

Note that section 230 is, of course, US specific and other countries manage without it.

Terr_ an hour ago | parent | prev | next [-]

> algorithmically elevated

I don't see a good way to make a definite legal distinction between the icky stuff and the normal, unobjectionable things which are, technically, also forms of elevation-by-algorithm:

    rank_by_age(items) // Good
    rank_by_age_and_poster_reputation(items) // Probably   
    rank_by_on_topic_ness(items, forum_subject)
    rank_by_likes(items)
    rank_by_engagement_likelihood(items) // Bad?
    rank_by_positive_sentiment_toward_clients(items) // Bad
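
The two ends of that spectrum can be written out as a minimal runnable sketch (the Post records here are made up, and predicted_engagement is a hand-set stand-in for what would, in practice, be the output of a trained model):

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    age_hours: float
    predicted_engagement: float  # stand-in for an ML model's score

posts = [
    Post("cat photo", 1.0, 0.2),
    Post("rage bait", 5.0, 0.9),
    Post("forum question", 2.0, 0.4),
]

# "Good" end: a neutral, user-predictable ordering (newest first)
by_age = sorted(posts, key=lambda p: p.age_hours)

# "Bad?" end: the platform surfaces whatever it most wants seen
by_engagement = sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

print([p.text for p in by_age])
print([p.text for p in by_engagement])
```

Both are one-line sorts, which is exactly why drawing a legal line between them is hard: the difference is entirely in what the sort key optimizes for.
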

SpicyLemonZest 3 hours ago | parent | prev [-]

Eliminating section 230 protections would heavily disfavor any kind of intellectually stimulating content, because it's hard for a platform to scalably verify that nobody's making defamatory claims. But pointless clickbait, heavily filtered Instagram models, etc. don't really have liability concerns on a video-by-video level. To me it seems like this makes the problem worse.

roughly 3 hours ago | parent [-]

It’s not eliminating section 230 entirely; it’s eliminating it for algorithmically promoted content. If your site hosts user content and presents it in a neutral fashion, section 230 applies. If you pick and choose what content to present to users (manually or by algorithm), you’re no longer a neutral platform and shouldn’t get the benefit of 230.

SpicyLemonZest 2 hours ago | parent [-]

I understand that. My point is that this would mean algorithmic feeds can only contain vapid, pointless content with no liability concerns. To me, it doesn't improve the world to require that Instagram and Youtube exclusively serve slop, even if that might cause some number of people to abandon them for non-algorithmic platforms with better content.

ToucanLoucan an hour ago | parent [-]

Literally every social media site I'm aware of has had, at varying times and with varying intensity (many still do), a movement among users asking for a fucking chronologically ordered feed. Just, what the fuck my friends are saying, in the reverse order that they said it, displayed in a list.

Not only is this seemingly the most desired feed among end users, it was also the default. MySpace didn't have a choice in the matter: they had to show a chronological timeline, because they didn't have a machine-learning algorithm or a way to build one. They could tweak it based on engagement metrics, but on the whole it was just "here's what all your friends have posted, in reverse order, scroll away." And eventually you'd hit the end, where it's like "you're up to date," and you'd go on with your fucking day.
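
That default is barely an algorithm at all. A minimal sketch, with made-up friends and posts, of the entire "here's what your friends posted, newest first" feed:

```python
from datetime import datetime, timedelta

# Hypothetical data: each friend's posts with timestamps
now = datetime(2024, 1, 1, 12, 0)
friends_posts = {
    "alice": [(now - timedelta(hours=3), "lunch pic")],
    "bob": [(now - timedelta(hours=1), "hot take"),
            (now - timedelta(hours=8), "morning run")],
}

# Flatten everyone's posts and sort newest-first: that's the whole feed
feed = sorted(
    ((ts, who, text) for who, posts in friends_posts.items()
     for ts, text in posts),
    reverse=True,
)

for ts, who, text in feed:
    print(f"{who}: {text}")
print("You're up to date.")
```

One flatten, one sort, and a natural stopping point at the end. No model, no per-user tuning, and about as little server compute as a feed can use.
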

But of course platforms hate that. They want you there all day, scrolling through an infinite deluge of bullshit amongst which they can park ads. And we know they hate it, because platforms have not only refused to bring back chronological feeds, they have actively removed them where they once existed. Not only is this doable, it's the cheapest option, requiring the least compute from their servers, yet platforms reliably chose the inverse... because it makes them more money.

Also specifically on this:

> My point is that this would mean algorithmic feeds can only contain vapid, pointless content

The vast majority of what's on these sites is vapid, pointless content RIGHT NOW, even if it tries to convince you otherwise.

SpicyLemonZest 33 minutes ago | parent [-]

Literally every social media site I'm aware of has a chronological ordered feed of people you've chosen to follow. Facebook does, Instagram does, Youtube does. It's just not the homepage, and most people don't care enough about what feed they get to go navigate to it every time they open the app. Would it be nice to make them let you put it on the homepage? Sure, I'd support that.

schmidtleonard 5 hours ago | parent | prev | next [-]

> more nuance

Not enough to diffuse liability. 15 years ago, when recommender algorithms were the new hotness, I saw every single group of students introduced to the idea immediately grasp that the endgame would involve pandering to base instincts. If someone didn't understand this, it's because

> It is difficult to get a man to understand something, when his salary depends on his not understanding it. - Upton Sinclair

steve-atx-7600 5 hours ago | parent | prev [-]

For context, Facebook is so dystopian when I log in once every few years that I'm not sure I'll ever use it again. And I hate wading through the YouTube cesspool to find some educational content I like. But I don't think it makes sense to ban A/B testing or optimization in general. Some company could use it, for example, to figure out how to teach math to kids in a way that's as engaging as possible. That would technically be "more addictive."

sampullman 4 hours ago | parent [-]

That's a good point; I'm not 100% sure it's worth throwing away the potentially beneficial uses. There might not be a solution that's both feasible to implement and avoids banning useful things. In the end I usually come back to it being the parents' responsibility to monitor usage, limit screen time, etc., but that hasn't been working so well in practice.