jmyeet 4 hours ago

I believe social media is on a collision course with an iceberg called Section 230.

Broadly speaking, Section 230 differentiates between publishers and platforms. A platform is like GeoCities (back in the day): the platform provider isn't liable for the content as long as it satisfies certain requirements about having processes for taking down content when required. A bit like the Cox decision today, you're broadly not responsible for the actions of people using your service unless your service is explicitly designed for such things.

A publisher (in the Section 230 sense) is like any media outlet. The publisher is liable for their content but they can say what they want, basically. It's why publishers tend to have strict processes around not making defamatory or false statements, etc.

I believe that any site that uses an algorithmic news feed is, legally speaking, a publisher acting like a platform.

Example: let's just say that you, as Twitter, FB, IG or Youtube were suddenly pro-Russian in the Ukraine conflict. You change your algorithm to surface and distribute pro-Russian content and suppress pro-Ukraine content. Or you're pro-Ukrainian and you do the reverse.

How is this different from being a publisher? IMHO it isn't. You've designed your algorithm knowingly to produce a certain result.

I believe that all these platforms will end up being treated like publishers for this reason.

So, with today's ruling about platforms creating addiction, (IMHO) it's no different from surfacing content. You are choosing content to produce a certain outcome. Intentionally getting someone addicted is functionally no different from changing their views on something.

I actually blame Google for all this because they very successfully sold the idea that "the algorithm" ranks search results like it's some neutral black box but every behavior by an algorithm represents a choice made by humans who created that algorithm.

lokar an hour ago | parent | next [-]

Please read:

https://www.techdirt.com/2020/06/23/hello-youve-been-referre...

jmyeet an hour ago | parent [-]

This is an opinion and I believe it's wrong. And you just have to look at the statute to see why [1]:

> (c) Protection for “Good Samaritan” blocking and screening of offensive material

> (2) Civil liability

> (A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or

"in good faith" is key here. Here's another opinion [2]:

> One argument advanced by those who want to limit immunity for platforms is that these algorithms are a form of content creation, and should therefore be outside the scope of Section 230 immunity. Under this theory, social media companies could potentially be held liable for harmful consequences related to content otherwise created by a third party.

So far the Supreme Court has sidestepped this issue despite cases making it to the Appeals Court. Until the Supreme Court addresses it, none of us can say with any certainty what is and isn't protected.

[1]: https://www.law.cornell.edu/uscode/text/47/230

[2]: https://www.naag.org/attorney-general-journal/the-future-of-...

lokar an hour ago | parent [-]

I don't expect that to work, but who knows. Editors "rank," curate, select, and present content to people, and have for a long time, and it has always been understood to be speech.

Remember, according to that link, 230 does not give platforms any new rights. It simply lets them end cases faster and cheaper — cases they would have already won on First Amendment grounds.

timdev2 3 hours ago | parent | prev [-]

Why do you believe that "Section 230 differentiates between publishers and platforms"?

jmyeet 2 hours ago | parent [-]

Section 230(c)(1) [1]:

> (c) Protection for “Good Samaritan” blocking and screening of offensive material

> (1) Treatment of publisher or speaker

> No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

This is a protection for being a platform for third-party (including user-generated) content.

Some more discussion on this distinction [2]:

> Section 230’s legal protections were created to encourage the innovation of the internet by preventing an influx of lawsuits for user content.

It goes on to talk about publishers, distributors and Internet Service Providers, the last of which I characterize as "platforms".

By the way, my view here isn't a fringe view [3]:

> One argument advanced by those who want to limit immunity for platforms is that these algorithms are a form of content creation, and should therefore be outside the scope of Section 230 immunity. Under this theory, social media companies could potentially be held liable for harmful consequences related to content otherwise created by a third party.

This is exactly my view.

[1]: https://www.law.cornell.edu/uscode/text/47/230

[2]: https://bipartisanpolicy.org/article/section-230-online-plat...

[3]: https://www.naag.org/attorney-general-journal/the-future-of-...

Dracophoenix 2 hours ago | parent [-]

This isn't good reasoning. According to your analysis, any website, ISP, or hosting provider that uses a firewall or Cloudflare is by definition a publisher, since they algorithmically shape traffic to prohibit suspicious IP addresses from accessing content.

jmyeet 2 hours ago | parent [-]

Not at all. Intent matters. Is Cloudflare trying to shape user behavior or push a particular position or content? No.

Just look at the Cox decision from the Supreme Court today. As long as the (Internet) service isn't designed for or sold as a method of downloading copyrighted material, the provider isn't responsible for any actions by its users. In other words, intent matters.

I find that technical people really get stuck on this aspect of the law. They look for technical compliance or an absolute proof standard because we're used to doing things like proving something works mathematically. But the law is subjective and holistic. It looks at the totality of evidence and applies a subjective test.

And intent here is fairly easy to establish. We could take an issue like Russia, look at all the posts and submissions, and see how many views and interactions those posts got. We then divide them into pro-Russian and pro-Ukrainian and establish a clear bias. We also look at any modifications made to the algorithm to achieve those goals.
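The measurement being described is straightforward. A minimal sketch of it in Python, with entirely invented data and field names (the labels, numbers, and `engagement_share` helper are illustrative, not any real dataset or platform API):

```python
# Hypothetical posts, each labeled by stance and tagged with engagement
# metrics. All values here are made up for illustration.
posts = [
    {"stance": "pro_russia",  "views": 90_000, "interactions": 4_500},
    {"stance": "pro_ukraine", "views": 10_000, "interactions": 300},
    {"stance": "pro_russia",  "views": 60_000, "interactions": 2_000},
    {"stance": "pro_ukraine", "views": 15_000, "interactions": 500},
]

def engagement_share(posts, stance):
    """Fraction of total views going to posts with the given stance."""
    total = sum(p["views"] for p in posts)
    side = sum(p["views"] for p in posts if p["stance"] == stance)
    return side / total

print(f"pro-Russia view share:  {engagement_share(posts, 'pro_russia'):.0%}")
print(f"pro-Ukraine view share: {engagement_share(posts, 'pro_ukraine'):.0%}")
```

A large, persistent skew in shares like these — especially alongside algorithm changes timed to produce it — is the kind of "totality of evidence" a court would weigh, though on its own a skew could also just reflect what users posted.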

This is nothing like Cloudflare's DDoS protection.