| ▲ | therobots927 a day ago | |
You could literally ask the LLM to obfuscate it and I bet it would do a pretty good job. Good luck parsing 1,000 lines of code manually to identify an exploit that you’re not even specifically looking for. | ||
| ▲ | lazide a day ago | parent [-] | |
Yup, add in some poetic prompt injection….. | ||