We are anthropomorphizing whenever we call prompts “instructions” to models. Models predict text; they don’t obey orders.
That’s not how language works; it’s just how engineers think it works.
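To make the point concrete, here is a minimal sketch (assuming the Hugging Face transformers library and GPT-2, neither of which the original text names): the “instruction” in the prompt is just more conditioning text, and all the model produces is a score for every possible next token.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# The "instruction" is nothing special to the model: it is simply
# part of the text the model conditions its predictions on.
prompt = "Instruction: reply only with the word 'yes'.\nAnswer:"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    # logits: one score per vocabulary token, at every position
    logits = model(input_ids).logits

# The model does not "obey" anything; it ranks possible continuations
# by likelihood. Inspect the top candidates for the next token:
next_token_logits = logits[0, -1]
top = torch.topk(next_token_logits, 5)
for score, token_id in zip(top.values, top.indices):
    print(repr(tokenizer.decode(int(token_id))), float(score))
```

Whether the top-ranked continuation happens to look like compliance depends entirely on the training distribution, not on any order being followed.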