qsera a day ago

> If they submitted bad code...

The core issue is that it takes a large amount of effort to even assess this, because LLM-generated code looks good superficially.

It is said that statically typed FP languages make it hard to implement something if you don't really understand what you are implementing. Dynamically typed languages make it easier to implement something when you don't fully understand it.

LLMs take this to another level by enabling one to implement something with zero understanding of what they are implementing.

sothatsit a day ago | parent [-]

The people likely to submit low-effort contributions are also the people most likely to ignore policies restricting AI usage.

The people following the policies are the most likely to use AI responsibly and not submit low-effort contributions.

I’m more interested in how we might let people build trust, so that reviewers can spend their time productively on trusted contributors' work while not wasting it on drive-by contributions. This seems like a hard problem.

dormento a day ago | parent | next [-]

I wonder if the right call wouldn't be to impose a LOC limit on contributions (sensibly chosen for the combination of language/framework/toolset).

sothatsit a day ago | parent [-]

I quite like this direction. Limit new contributors to small contributions, and then relax restrictions as more of their contributions are accepted.
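A minimal sketch of how such a tiered limit could be automated. All names and thresholds here are invented for illustration; a real setup would pull a contributor's merged-PR count from the forge's API and run this as a CI check.

```python
# Hypothetical tiered LOC gate: the allowed diff size grows with the
# number of a contributor's previously accepted contributions.
# Thresholds are illustrative, not a recommendation.

def loc_limit(merged_prs: int) -> int:
    """Max changed lines allowed, based on accepted contributions so far."""
    if merged_prs < 3:
        return 50       # brand-new contributors: small patches only
    if merged_prs < 10:
        return 300      # some track record: medium changes allowed
    return 10_000       # established contributors: effectively unlimited

def pr_allowed(changed_lines: int, merged_prs: int) -> bool:
    """True if this PR fits within the contributor's current limit."""
    return changed_lines <= loc_limit(merged_prs)

print(pr_allowed(40, merged_prs=0))    # small first PR -> True
print(pr_allowed(400, merged_prs=0))   # large first PR -> False
print(pr_allowed(400, merged_prs=12))  # trusted contributor -> True
```

The point of the step function is that limits relax automatically as contributions are accepted, without a maintainer having to grant trust manually.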

qsera 10 hours ago | parent | prev | next [-]

I think the best place AI can help in software development is with reviews, not with doing the development itself.

But AI marketing would not like to promote that, maybe because it is less dramatic and does not involve a paradigm shift or something...

mort96 20 hours ago | parent | prev [-]

The people who write the most shitty AI code seem to be the proudest of their use of AI.