Prompt injection just seems unsolvable.
Is there any work toward preventing it 100% of the time? (I would assume LLM architectures would have to change.)