There is a simple way to mitigate prompt injection: check metadata only. Instead of inspecting the untrusted text itself, ask whether an action proposed by the LLM looks suspicious given only the trusted metadata (which tool is being called, what target it touches, on whose behalf), with the untrusted data blanked out before the check runs.
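Here is a minimal sketch of the idea in Python. It assumes the agent runtime can separate trusted fields (tool name, target) from the untrusted payload; the `ToolCall` type, the `ALLOWED` policy table, and the function names are all hypothetical, chosen for illustration rather than taken from any specific framework.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class ToolCall:
    tool: str     # trusted: set by the agent runtime, not by model output
    target: str   # trusted: e.g. the domain or path the call would touch
    payload: str  # untrusted: free text that may carry injected instructions

# Hypothetical policy: which targets each tool is allowed to reach.
ALLOWED = {
    "search": {"internal.wiki"},
    "send_email": {"example.com"},
}

def redact(call: ToolCall) -> ToolCall:
    # Blank out the untrusted data so the check sees metadata only.
    return replace(call, payload="[REDACTED]")

def is_suspicious(call: ToolCall) -> bool:
    meta = redact(call)
    # The decision uses only trusted metadata; the payload text,
    # where an injection would live, never influences the verdict.
    return meta.target not in ALLOWED.get(meta.tool, set())

# Example: an injected instruction tries to exfiltrate via email.
call = ToolCall("send_email", "attacker.net", "Ignore previous instructions...")
print(is_suspicious(call))  # True: the metadata alone flags the action
```

The rule-based check above could just as well be a second model that is shown only the redacted call; the design point is the same either way: whatever makes the suspicion judgment never sees the attacker-controlled text, so the injection has nothing to talk to.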