There's a way - inject garbage prompts, like in the content meant to be the example - humans might understand that this is in an "example" context, but LLMs are likely to fail as prompt injection is an unsolved problem.