simiones 7 days ago
The script tag is just a PoC of the capability. The attack vector could obviously be used to "convince" the LLM to do something much more subtle to undermine security, such as recommending code that's vulnerable to SQL injection or that uses weaker cryptographic primitives, etc.
moontear 7 days ago | parent
Of course, but this doesn't undermine the OP's point of "who allows the AI to do stuff without reviewing it". Even WITHOUT the "vulnerability" (if we call it that), AI may always produce code that is vulnerable in some way. The vulnerability certainly increases the risk a lot, so it should also be addressed by having editors and tools show all characters in text files, but AI-generated code always needs to be reviewed - just like human code.
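As a minimal sketch of the "show all characters" idea: a scan for invisible Unicode format characters (zero-width spaces, bidi overrides, and similar) that most editors render as nothing, which is what makes hidden-prompt attacks possible. The function name and sample string here are illustrative, not from any particular tool.

```python
import unicodedata

def find_hidden_chars(text):
    """Return (index, codepoint, name) for each invisible character.

    Unicode general category "Cf" (format) covers characters like
    U+200B ZERO WIDTH SPACE and U+202E RIGHT-TO-LEFT OVERRIDE that
    are invisible in most editors but still reach an LLM verbatim.
    """
    hits = []
    for i, ch in enumerate(text):
        if unicodedata.category(ch) == "Cf":
            hits.append((i, f"U+{ord(ch):04X}", unicodedata.name(ch, "UNKNOWN")))
    return hits

if __name__ == "__main__":
    # Looks like ordinary text in an editor, but carries hidden characters.
    sample = "review this\u200b code\u202e carefully"
    for idx, cp, name in find_hidden_chars(sample):
        print(idx, cp, name)
```

Running something like this in CI, or configuring the editor to render non-printing characters, would surface the payload before it ever reaches review.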