Nevermark 4 hours ago
> Nor is it an argument that companies can’t do better jobs within their own content moderation efforts. But I do think there’s a huge problem in that many people — including many politicians and journalists — seem to expect that these companies not only can, but should, strive for a level of content moderation that is simply impossible to reach.

The three problems I see are:

1. People who imagine content moderation prohibitions would be a utopia.

2. People who imagine content moderation should be perfect (of course, by which I mean their own practical, acknowledged-imperfect measure. Because even if everyone is pro-practicality, if they are pro-practicality in different ways, we still get an impossible demand.)

3. This major problem/disconnect I just don't ever see discussed (this would address harms in a way that the false dichotomy of (1) and (2) does not):

a) If a company is actively promoting some content over other content, for any reason (a free speech exercise, which allows for many motives here), it should be held to a MUCH higher standard for those active choices, vs. neutral providers, with regard to harms.

b) If a company is selectively financially underwriting content creation, i.e. paying for content by any metric (again, a free speech exercise that allows for many motives), it should be held to a MUCH higher standard for its financed/rewarded content, vs. content it sources without financial incentive, with regard to harms.

Safe harbor protections should be for content made available on a neutral basis: producers publish, consumers search and select. As soon as a company injects its own free speech choices (by preferentially selecting content for users, or paying for selected content), much higher responsibilities should apply.

A neutral content site can still make money in many ways. Advertising still works. Pay for content on an even basis, but provide only organic (user-driven) discovery, etc. On such a neutral utility basis, safe harbor protection regarding content (assuming some reasonable means of responding to reports of harmful material) makes sense.

Safe harbors do not make sense for services that use their free speech freedoms to actively direct users to service-preferred content, or to actively finance service-preferred content. Independent of what is preferred (i.e. the responsibility that is applied should itself remain neutral; the nature of the company's free speech choices should not be the issue).

Imposed selection, selective production => speech => responsibility.

Almost all the systematic harms by major content/social sites can be traced to perverse incentives actively pursued by the site.

This rule should apply: Active Choices => Responsibility for Choices. Vs. Neutrality => Responsible Safe Harbor.

This isn't a polemic against opinionated or hands-on content moderators. We need them. We need to allow them, so we have those rights too. It is a polemic against de-linking free speech utilization from free speech responsibility. And especially against de-linking that ethical balance at scale.