| ▲ | bornfreddy a day ago | |
This is actually where LLMs could be in advantage. Any code which is not clean (i.e. could be obfuscated) will trigger alarms and deeper inspection. It is much more difficult to create a good "underhanded" exploit that LLM will miss than it is to do the same for humans, imho. | ||
| ▲ | whyever a day ago | parent [-] | |
LLMs are vulnerable to prompt injection attacks, so I'm not sure they are in advantage. | ||