johnsillings a day ago

That's a great question + something we've discussed internally a bit. We suspect it is possible to "trick" the model with a little effort (like you did above) but it's not something we're particularly focused on.

The primary use-case for this model is for engineering teams to understand the impact of AI-generated code in their production codebases.

mendeza a day ago

I agree this would be a great tool for organizations to use to see the impact of AI code in their codebases. Engineers will probably be too lazy to modify the code enough to make it look less AI-generated. You could probably enhance the robustness of your classifier with synthetic data like this (rough sketch below).
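
For illustration, here is a minimal Python sketch of that augmentation idea, assuming you already have a corpus of labeled AI-generated snippets. The perturbation functions are hypothetical placeholders for whatever "make it look human" edits an engineer might try; they aren't part of any real tool.

    import random

    def perturb(snippet: str) -> str:
        # Apply one cheap "make it look human" edit (placeholder heuristics).
        edits = [
            lambda s: s.replace("    ", "  "),  # inconsistent indentation
            lambda s: "\n".join(l for l in s.splitlines()
                                if not l.strip().startswith("#")),  # drop comments
            lambda s: s.replace("result", "res"),  # terser, sloppier names
        ]
        return random.choice(edits)(snippet)

    def augment(ai_snippets):
        # Yield (code, label) pairs: originals plus perturbed copies,
        # all still labeled as AI-generated (label = 1).
        for s in ai_snippets:
            yield s, 1
            yield perturb(s), 1

Training on both the clean and the deliberately "messied" copies is what would make the classifier harder to fool with the kind of trick mentioned above.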

I think it would be an interesting research project to detect whether someone is manipulating AI-generated code to look messier. Sadasivan et al. (https://arxiv.org/pdf/2303.11156) proved that detector performance is bounded by the total variation distance between the two distributions: if the distributions are truly identical, the best any detector can do is random guessing. The scaling-law trends for LLMs point in that direction, so the open question is whether, as models improve, their code becomes indistinguishable from human code.
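
To make that bound concrete: if I recall the result correctly, the paper caps any detector's AUROC at 1/2 + TV - TV^2/2, where TV is the total variation distance between the machine and human distributions. A toy sketch (the distributions below are invented purely to illustrate the shape of the bound, not real code statistics):

    def tv_distance(p, q):
        # Total variation distance between two discrete distributions.
        return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

    def auroc_ceiling(tv):
        # Best achievable detector AUROC per Sadasivan et al. (2023).
        return 0.5 + tv - tv ** 2 / 2

    human   = [0.50, 0.30, 0.20]  # hypothetical distribution over style features
    machine = [0.20, 0.30, 0.50]

    tv = tv_distance(human, machine)
    print(f"TV = {tv:.2f}, AUROC ceiling = {auroc_ceiling(tv):.2f}")
    # As the two distributions converge (TV -> 0), the ceiling -> 0.5,
    # i.e. any detector degrades to random guessing.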

Be fun to collaborate!

runako a day ago

The primary signal that allows AI generation to be inferred appears to be that the code is clean and well-structured. (Leave aside for a moment the oddity that these are machines whose primary benchmarks are human-written code, in a style now deemed too perfect to have been written by people.)

Does that create an incentive for people writing code by hand to write worse, badly structured code as proof that they didn't use AI to generate it?

Is there now a disincentive for writing good code with good comments?