JaggedJax 7 hours ago

I'm not sure when this policy was introduced, but fairly recently Jellyfin released a pretty major update that introduced a lot of bugs and performance issues. I've been watching their issue tracker as they work through them and have noticed it's flooded with LLM-generated PRs and obviously LLM-generated PR comments/descriptions/replies. A lot of the LLM-generated PRs are a mishmash of 2-8 different issues all jumbled into a single PR.

I can see how frustrating it is to wade through those; they're distracting and take time away from actually getting things fixed up.

djbon2112 3 hours ago | parent | next [-]

We've had these thoughts for a while, especially relating to clients, but that is exactly what prompted this - a huge number of pure-vibe-coded "fixes performance" PRs that have been a nightmare to wade through.

bjackman 7 hours ago | parent | prev | next [-]

I have lately taken to this approach when I raise bugs:

1. Fully human-written explanation of the issue with all the info I can add

2. As an attachment to the bug (not a PR), explicitly noted as such, an AI slop fix and a note that it makes my symptom go away.

I've been on the receiving end of one bug report in this format and I thought it was pretty helpful. Even though the AI fix was garbage, the fact that the patch made the bug go away was useful signal.

Gigachad 7 hours ago | parent | prev [-]

The open-to-anyone PR model might be at risk now. How can maintainers be expected to review unlimited slop coming in? I can see a lot of open source projects just giving up on allowing community contribution, or maybe only allowing trusted members to contribute after they have demonstrated more than a passing interest in the project.

pixl97 5 hours ago | parent [-]

It has been at risk for a long time, now it is in doubt.

Think of a scenario like this:

An attacker floods you with tons of AI slop to leave you overloaded and at risk of making mistakes. These entries should have just enough basis in reality to avoid summary rejection.

Then the attacker puts in a useful batch of code that fixes issues and injects a tricky security flaw.

If there's not a lot going on, the second part is hard to pull off. But if you ruin the signal-to-noise ratio, it becomes more likely.

fn-mote 3 hours ago | parent [-]

That's not going to be the scenario (IMO). After the AI slop comes in, everything in the queue is going to be triaged as garbage to clear it.

pixl97 3 hours ago | parent [-]

The attacker never has to stop.