dizlexic 2 days ago

This might get me in trouble, but with all the negativity I’m seeing here I’ve got to ask.

Why do you care? Their sandbox, their rules, and if you care because you want to contribute you’re still free to do so. Unless you’re an LLM, I guess, but the rest of us should have no problem.

The negativity just seems overblown. More power to them, and if this was a bad call they’ll revisit it.

attentive 2 days ago | parent | next [-]

> and if this was a bad call they’ll revisit it.

How would they know? This is one of the ways for people to let them know.

const_cast a day ago | parent | next [-]

Let's stop bullshitting, nobody here is going to contribute to Gentoo and is now put off because of this policy change.

What we're looking at is mostly JavaScript monkeys who feel personally offended because they're unable to differentiate criticism of their tools from criticism of their own personal character.

The outrage is purely theoretical.

dizlexic a day ago | parent [-]

As a JavaScript monkey I believe you have a point, and this was the core of my original question.

How many contributors to Gentoo are upset by this? Probably none.

How many potential contributors to Gentoo are upset by this? Maybe dozens?

I'll be amazed if this has any notable negative outcomes for Gentoo and their contributions.

SAI_Peregrinus a day ago | parent | next [-]

I suspect most of the people upset by this are the sort to dump a pile of unreviewed slop on the maintainers & get upset when it gets rejected, i.e. they're the problem this is aimed at fixing.

const_cast a day ago | parent | prev [-]

No offense intended to JavaScript monkeys, I'm a JavaScript monkey too. And a PHP monkey, which I think is worse.

joecool1029 2 days ago | parent | prev [-]

It isn't though. This is just noise. It's a good conversation thread for HN, but it has absolutely zero influence on Gentoo policy.

The only way it'll be revisited is if active Gentoo developers and/or contributors really start to push with a justification to get it changed and they agree to revisit discussing it again. I can tell you every maintainer has heard the line: 'I would have contributed if you did X thing'.

h4ny 2 days ago | parent | prev | next [-]

Not speaking for everyone but to me the problem is the normalization of bad behavior.

Some people in this thread are already interpreting policies that allow contributions of AI-generated code to mean it's OK not to understand the code they write and to offload that work onto the reviewers.

If you have ever had to review code that an author doesn't understand or written code that you don't understand for others to review, you should know how bad it is even without an LLM.

> Why do you care? Their sandbox, their rules...

* What if it's a piece of software or dependency that I use and support? That affects me.

* What if I have to work with these people in this community? That affects me.

* What if I happen to have to mentor new software engineers who were conditioned to think that bad practices are OK? That affects me.

Things are usually less sandboxed than you think.

incomingpain a day ago | parent | prev | next [-]

> Why do you care? Their sandbox, their rules, and if you care because you want to contribute you’re still free to do so. Unless you’re an LLM, I guess, but the rest of us should have no problem.

Exactly this. It's their decision to make; their consequences as well.

Then again, I would have bet $1000 that Gentoo disappeared 15 years ago, probably around 2009. I legitimately haven't even heard about them in at least that long.

So rejecting contributions from whoever might even still be around seems like a bad decision.

adastra22 2 days ago | parent | prev [-]

I like the idea of Gentoo, and I've considered switching back to it. I won't now, as I don't see a future for it if this is the attitude they take towards new technologies.

globular-toast 2 days ago | parent [-]

This seems like the kind of thing you'd want from a distro. Would you be happy if your doctor just started giving you new drugs because they're "new technology"? Or would you prefer it to go through rigorous rounds of testing and evaluation to figure out the potential problems?

adastra22 2 days ago | parent [-]

I certainly hope my medical team is using AI tools, as they have been repeatedly demonstrated to be more accurate than doctors.

Only downside is my last psychiatrist dropped me as a patient when he left his practice to start an AI company providing regulatory compliance for, essentially, Dr. ChatGPT.

wobfan a day ago | parent | next [-]

> I certainly hope my medical team is using AI tools, as they have been repeatedly demonstrated to be more accurate than doctors.

AI is not a new tool - transformer-based LLMs are. Which is what this post is about.

The latter are well known to be a LOT less accurate, and still very prone to hallucination. This is just a fact. For your health, I hope no one on your medical team is using the current generation for anything other than casual questions.

I'm not an opponent, and I don't think an outright ban on LLM-generated code commits is the right thing, but I can understand their stance.

globular-toast a day ago | parent | prev [-]

Honestly it just sounds like you've been sold on "AI" being a thing and don't have any idea how any of it works. I don't even know what you're referring to with "more accurate than doctors". Classifying scans or something? Do you realise how different that is to generative LLMs writing code etc? Scan classification may well have been shown to be more accurate, but generative LLMs have never been shown to be "better" than humans and in fact it's easy to demonstrate they are much, much worse in many ways.

adastra22 a day ago | parent [-]

LLMs perform better than doctors in a randomized trial:

https://jamanetwork.com/journals/jamanetworkopen/fullarticle...

And here: https://arxiv.org/html/2503.10486v1

globular-toast a day ago | parent [-]

> the use of an LLM did not significantly enhance diagnostic reasoning performance compared with the availability of only conventional resources.

The other one isn't peer-reviewed. Your précis doesn't appear to be warranted.

adastra22 a day ago | parent [-]

You only read the first line of the summary. This is the juicy bit:

> The LLM alone scored 16 percentage points (95% CI, 2-30 percentage points; P = .03) higher than the conventional resources group.

Basically, they set up the experiment with a control group and an LLM-assisted group. There was no difference between the two groups, and that is what was reported in the top-level finding you quote.

Then they went back and asked: “Wait, what if we just blindly trusted the LLM? What if we had a third group with no doctor involved — just let the LLM do the diagnosis?” This retroactively synthesized group did significantly better than either of the actual experimental groups:

> The LLM alone scored 16 percentage points (95% CI, 2-30 percentage points; P = .03) higher than the conventional resources group … The LLM alone demonstrated higher performance than both physician groups, indicating the need for technology and workforce development to realize the potential of physician-artificial intelligence collaboration in clinical practice.