arjunbajaj 4 hours ago

I can see this becoming a pretty generally accepted AI usage policy. Very balanced.

Covers most of the points I'm sure many of us have run into here while developing with AI. Most importantly, AI-generated code does not substitute for human thinking, testing, and cleanup/rewrite.

On that last point: whenever I've had Codex generate a substantial feature, I've usually had to rewrite a lot of the code to make it more compact, even when it's correct. Adding indirection where it doesn't make sense is a big issue I've noticed with LLMs.
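
For example, something like this contrived Python sketch (mine, not actual Codex output), where a one-off wrapper class and factory stand in for what is really a single function:

    # What an LLM might plausibly generate: three layers for one string.
    class GreetingFormatter:
        def format(self, name: str) -> str:
            return f"Hello, {name}!"

    def make_greeting_formatter() -> GreetingFormatter:
        # A factory with exactly one call site and nothing to configure.
        return GreetingFormatter()

    def greet(name: str) -> str:
        return make_greeting_formatter().format(name)

    # The compact human rewrite: same behavior, one function.
    def greet_rewritten(name: str) -> str:
        return f"Hello, {name}!"

    assert greet("Ada") == greet_rewritten("Ada")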

fzaninotto 18 minutes ago | parent | next

I agree. This could be a template that services like GitHub propose, the same way they suggest contributing and code-of-conduct templates.

imiric 4 hours ago | parent | prev

I agree with you on the policy being balanced.

However:

> AI-generated code does not substitute for human thinking, testing, and cleanup/rewrite.

Isn't that the end goal of these tools and companies producing them?

According to the marketing[1], the tools are already "smarter than people in many ways". If that is the case, what are these "ways", and why should we trust a human to do a better job at them? If these "ways" keep expanding, which most proponents of this technology believe will happen, then the end state is that the tools are smarter than people at everything, and we shouldn't trust humans to do anything.

Now, clearly, we're not there yet, but where the line is drawn today is extremely fuzzy, and mostly based on opinion. The wildly different narratives around this tech certainly don't help.

[1]: https://blog.samaltman.com/the-gentle-singularity

nicoburns an hour ago | parent | next

> Isn't that the end goal of these tools and companies producing them?

That does seem to be the goal. But they are very far from achieving it.

One thing you should probably account for is that most proponents of these technologies are trying to sell you something. That doesn't mean the tools have no value, but the wild claims about their capabilities are just that: claims.

Terretta 3 hours ago | parent | prev | next

Intern-generated code does not substitute for tech-lead thinking, testing, and cleanup/rewrite.

imiric 2 hours ago | parent

No, the code is generated by a tool that's "smarter than people in many ways". So which parts of "thinking, testing, and cleanup/rewrite" can we trust it with?

TeMPOraL an hour ago | parent | next

Trust is a function of responsibility, not of smarts.

You may hire a genius developer who's better than you at everything, and you still won't trust them blindly with work you're responsible for. In fact, the smarter they are than you, the less trusting you can afford to be.

cmsj 2 hours ago | parent | prev | next

The marketing is irrelevant. The AIs are not aware of what they are doing, nor are they motivated the way humans are.

phanimahesh an hour ago | parent | prev

Very little, until it stops being stupid in many ways. We don't need smart; we need tools that aren't stupid. An unreliable tool is more dangerous, and less useful, than no tool at all.
