mendeza | a day ago
I agree this would be a great tool for organizations to measure the impact of AI-generated code in their codebases. Engineers will probably be too lazy to modify the code enough to make it look less AI-written, and you could enhance the robustness of your classifier with synthetic data along those lines. It would also be an interesting research project to detect whether someone is deliberately manipulating AI-generated code to look messier.

One caveat: Sadasivan et al. (https://arxiv.org/pdf/2303.11156) proved that detector performance is bounded by the total variation distance between the two distributions. If the two distributions are truly identical, the best any detector can do is random guessing. The trend with LLMs (via scaling laws) is heading in that direction, so the open question is: as models improve, will their code become indistinguishable from human code?

It'd be fun to collaborate!
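To make the bound concrete, here's a minimal sketch (function names are mine, not from the paper) of the idea for discrete distributions: with equal priors, the optimal detector's balanced accuracy is (1 + TV)/2, so as the AI distribution approaches the human one, accuracy collapses toward the 0.5 coin-flip baseline.

```python
import numpy as np

def total_variation(p, q):
    # TV distance between two discrete distributions over the same outcomes:
    # half the L1 distance between the probability vectors.
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return 0.5 * np.abs(p - q).sum()

def best_balanced_accuracy(p, q):
    # With equal class priors, the Bayes-optimal detector labels an outcome
    # "human" exactly when p > q, achieving accuracy (1 + TV(p, q)) / 2.
    return 0.5 * (1.0 + total_variation(p, q))

# Toy token/feature distributions (made up for illustration).
human   = [0.50, 0.30, 0.20]
ai_far  = [0.10, 0.20, 0.70]   # easily distinguishable model
ai_near = [0.45, 0.30, 0.25]   # model close to the human distribution

print(best_balanced_accuracy(human, ai_far))   # high: detection is easy
print(best_balanced_accuracy(human, ai_near))  # barely above 0.5
print(best_balanced_accuracy(human, human))    # exactly 0.5: random guessing
```

As `ai_near` converges to `human`, no detector, however clever, can beat the shrinking bound.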