Remix.run Logo
gmerc an hour ago

This betrays a lack of understanding how inference works. You cannot categorically defeat prompt injection with instructions. It does not work. There are no privileged tokens.

lmeyerov an hour ago | parent [-]

Yep! One of my favorite attacks is just having a very long piece of a text so the LLM becomes unclear what's important and is happy to do something else