pshc 4 hours ago
I was daydreaming of a special LLM setup wherein each token of the vocabulary appears twice. Half the token IDs are reserved for trusted, indisputable sentences (coloured red in the UI), and the other half of the IDs are untrusted. Effectively system instructions and server-side prompts are red, whereas user input is normal text. It would have to be trained from scratch on a meticulous corpus which never crosses the line. I wonder if the resulting model would be easier to guide and less susceptible to prompt injection.
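(A minimal sketch of the doubled-vocabulary idea, just to make it concrete: every base token gets a "trusted" ID in [0, V) and an "untrusted" twin in [V, 2V), so provenance is baked into the token stream itself. The names and the vocab size here are made up for illustration.)

    BASE_VOCAB_SIZE = 50_000  # size of the underlying tokenizer's vocabulary

    def encode_with_provenance(base_ids, trusted):
        """Shift untrusted token IDs into the upper half of the doubled vocab."""
        offset = 0 if trusted else BASE_VOCAB_SIZE
        return [tid + offset for tid in base_ids]

    def decode_provenance(doubled_ids):
        """Recover (base_id, trusted) pairs, e.g. so a UI can colour trusted spans red."""
        return [(tid % BASE_VOCAB_SIZE, tid < BASE_VOCAB_SIZE) for tid in doubled_ids]

    # System-prompt tokens land in the trusted half, user tokens in the untrusted half:
    system_ids = encode_with_provenance([17, 42, 99], trusted=True)   # -> [17, 42, 99]
    user_ids   = encode_with_provenance([17, 42, 99], trusted=False)  # -> [50017, 50042, 50099]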
tempaccsoz5 2 hours ago
Even if you don't fully retrain, you could likely get a pretty good safety improvement. Honestly, I'm a bit surprised the main AI labs aren't doing this. You could just include an extra single bit with each token that marks it as trusted or untrusted, then add an extra RL pass to enforce it.
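(Rough sketch of what that bit could look like, assuming a standard transformer stack: keep the original vocabulary and add a learned trust embedding to each token embedding, much like BERT's segment embeddings. The class and names below are hypothetical; the model would still need fine-tuning, e.g. the extra RL pass, to actually respect the bit.)

    import torch
    import torch.nn as nn

    class TrustTaggedEmbedding(nn.Module):
        def __init__(self, vocab_size: int, d_model: int):
            super().__init__()
            self.tok = nn.Embedding(vocab_size, d_model)
            self.trust = nn.Embedding(2, d_model)  # 0 = untrusted, 1 = trusted

        def forward(self, token_ids: torch.Tensor, trust_bits: torch.Tensor) -> torch.Tensor:
            # token_ids, trust_bits: (batch, seq_len); trust_bits is 0/1 per token
            return self.tok(token_ids) + self.trust(trust_bits)

    emb = TrustTaggedEmbedding(vocab_size=50_000, d_model=768)
    ids = torch.tensor([[17, 42, 99]])
    bits = torch.tensor([[1, 1, 0]])  # first two tokens trusted, last one untrusted
    x = emb(ids, bits)  # shape (1, 3, 768), ready to feed into the transformer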