Has anyone trained an LLM with separate channels for "priority instructions" and ordinary user input? Seems like that could go a long way toward preventing jailbreaking...
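
By "separate channels" I mean something along these lines (rough sketch, not a real implementation; the channel IDs and module names are made up): tag every token with a channel ID and add a learned channel embedding on top of the token embedding, similar to segment embeddings, so the model can always tell which text came from the trusted side and which came from the user.

```python
import torch
import torch.nn as nn

# Hypothetical channel IDs: 0 = priority/system instructions, 1 = ordinary user text
CHANNEL_PRIORITY, CHANNEL_USER = 0, 1

class ChannelAwareEmbedding(nn.Module):
    def __init__(self, vocab_size: int, d_model: int, num_channels: int = 2):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d_model)
        self.chan = nn.Embedding(num_channels, d_model)  # learned per-channel signal

    def forward(self, token_ids: torch.Tensor, channel_ids: torch.Tensor) -> torch.Tensor:
        # token_ids, channel_ids: (batch, seq_len)
        return self.tok(token_ids) + self.chan(channel_ids)

# Usage: priority instructions get channel 0, user text gets channel 1
emb = ChannelAwareEmbedding(vocab_size=50_000, d_model=512)
tokens = torch.randint(0, 50_000, (1, 6))
channels = torch.tensor([[CHANNEL_PRIORITY] * 2 + [CHANNEL_USER] * 4])
x = emb(tokens, channels)  # (1, 6, 512), fed into the transformer stack as usual
```

The idea being that the user can type whatever they want, but they can never make their text *look* like it arrived on the priority channel, since the channel ID is set by the serving stack, not by the prompt text.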