Probably. It’s trivial to plug some obfuscated code into an LLM and ask it what it does.
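Roughly something like this, as a minimal sketch — assuming the OpenAI Python client, an API key in the environment, and a placeholder model name and obfuscated snippet (all illustrative, not a real detection pipeline):

```python
# Minimal sketch: ask an LLM what an obfuscated snippet does.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

# Hypothetical obfuscated JavaScript, just for illustration.
obfuscated_code = r"""
(function(_0x1a){var _0x2b=_0x1a[0];return eval(_0x2b);})(["1+1"]);
"""

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any capable model
    messages=[
        {"role": "system", "content": "You are a reverse-engineering assistant."},
        {"role": "user", "content": "Explain step by step what this obfuscated "
                                    "JavaScript does and whether it looks malicious:\n"
                                    + obfuscated_code},
    ],
)

print(response.choices[0].message.content)
```

Whether the answer you get back is *reliable* is, of course, a different question.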
Yeah, but just imagine how many false positives and false negatives there would be...