danpalmer 2 days ago

This is a prime example of poor AI policy. It doesn't define what AI is: is using Google Translate to engage on their mailing lists allowed? Are Intellisense-like tools that we've had for decades allowed? The rationale is also weak, citing concerns that apply far more widely than just LLMs. The ethical concerns are pretty hand-wavy; I'm pretty sure email is used to empower spam, and yet I suspect Gentoo has no problem using email.

The end result is not necessarily a bad one, and I think it's a reasonable one for a project like Gentoo to go for, but the policy could be stated in a much better way.

For example: thou shalt only contribute code that is unencumbered by copyright issues; contributions must be of high quality; and repeated attempts to submit poor-quality contributions may result in new contributions not being reviewed/accepted. As for the ethical concerns, they could take a position by buying infrastructure from companies that align with their ethics, or by not accepting corporate donations (time or money) from companies they disagree with.

Spivak 2 days ago | parent | next [-]

Or, because this is a policy by and for human adults who all understand what we're talking about, you just don't accept contributions from anyone obviously rule-lawyering in bad faith.

This isn't a court system; anyone intentionally trying to test the boundaries probably isn't someone you want to bother with in the first place.

danpalmer 2 days ago | parent [-]

Because this policy is so specific about what it bans, it's hard to enforce against people who stay just within the letter of it, and it creates a grey area and friction for those who meet the spirit of the policy in good faith but are technically in violation.

I have friends and colleagues whom I trust as good engineers who take different positions on this (letter vs. spirit), and I think good-faith contributions are negatively impacted on both sides.

dmead 2 days ago | parent | prev [-]

> It doesn't define what AI is

This is a bad-faith comment.

malfist 2 days ago | parent | next [-]

The whole argument smacks of bad-faith "yet you participate in society" reasoning.

danpalmer 2 days ago | parent | prev [-]

Honestly, I tried to make this point in good faith. The examples I gave were perhaps extreme, but my point is that AI is a moving target. Today it means specifically generative AI from large models, usually not classification, recommendations, or "small" models, all of which have been normalised. LLMs are becoming normalised too, and policy needs to be able to keep up with the shifting technological landscape.

Defining policy on outcomes, rather than inputs, makes it more resilient and ultimately more effective. Defining policy on inputs is easy to dismantle.