> Asking a chatbot (e.g. vanilla Claude) to summarize an unknown document is not risky, since all it can do is generate text.
Prompt injection in the document itself is a risk to the LLM/You.