mw888 3 hours ago

Here are the actual policies, not a comment:

https://github.com/jyn514/rust-forge/blob/llm-policy/src/pol...

It's in line with the 'nanny' stereotype of the Rust community that they give you permission to act in ways they would never be able to verify anyway:

> The following are allowed.
> Asking an LLM questions about an existing codebase.
> Asking an LLM to summarize comments on an issue, PR, or RFC...

Like seriously, what's the point of explicitly allowing this? Imagine the opposite were true, you weren't allowed to do this - what would they do? Revert an update because the person later claimed they checked it with an LLM?

The Linux policy on this is far superior and more sensible.

MaulingMonkey 2 hours ago | parent | next [-]

> Like seriously, what's the point of explicitly allowing this?

Explicit permission can be useful to preemptively cut off some questions from well meaning people who, acting in good faith, might otherwise pester for clarification (no matter how silly / "obvious" it might otherwise be), or get agitated by misconstruing an all-banned list as being an overly verbose "no LLMs ever" overreach.

> It's in-line with the 'nanny' stereotype of the Rust community that they give you permission to act in a way they would never be able to verify anyways: [...]

Many of us work or have worked in corporate settings where IT takes great pains to help detect and prevent data exfiltration, and have absolutely installed the corporate spyware to detect those kinds of actions when performed on their own closed source codebases. Others rely on the honor system - at least as far as you know - but still ban such actions out of copyright/trade secret concerns. If you're steeped deeply enough in that NDA-preserving culture, a reminder that you've switched contexts might help when common sense proves uncommon.

While nannying can be obnoxious, I'm not sure that having a document one can point to/link/cite, to allay any raised concerns, counts.

vintermann 2 hours ago | parent | prev | next [-]

> Like seriously, what's the point of explicitly allowing this?

I would have LOVED if the university course I took last winter had this. I had to take a very paranoid attitude to what was allowed.

What they're trying to avoid is a lot of unnecessary conflict with zealous anti-AI people calling for your exclusion for admitting to doing these things. There are people who would ban this too.

davesque 4 minutes ago | parent [-]

So then the Rust maintainers are going to give you an F on your report card?

kouteiheika 3 hours ago | parent | prev [-]

> Like seriously, what's the point of explicitly allowing this? Imagine the opposite were true, you weren't allowed to do this - what would they do?

If they just said "LLMs are banned", there would be a lot of ambiguity. So they specifically outlined that generative uses of LLMs are banned, and that non-generative ones are not banned (i.e. "allowed").

I think it's a poor choice of words on their part, but it makes sense (considering what their policy is). It's more of a "we're not disallowing use in these particular scenarios, so you can still use LLMs for these if you want". Remember: it's a big project, and if they don't explicitly state something then people will ask and waste everyone's time.

saghm 2 hours ago | parent [-]

If anything, it reads to me as a proactive rebuttal of complaints that they don't allow LLMs; they're definitively stating that they do allow using them for very specific purposes.