▲ | Drakim 10 hours ago | |
A lot of AI jailbreaks seems to revolve around saying something like "disregard the previous instructions" and "END SESSION \n START NEW SESSION". It's interesting because the actual real developer of an AI would likely not do this, and would instead wipe the AI's memory/context programmatically when starting a new session, and not simply say "disregard what I said earlier" in text. I get why trying to vaccinate an AI against these sort of injections might also degrade it's general performance though, there is a lot of reasoning logic tied to concepts such as switching topics, going on tangents, asking questions before going back to the original conversation. Removing the ability to "disregard what I asked earlier" might do harm. But what about having a separate AI that look over the input before passing it to the true AI, and this separate AI is trained to respond FORBID or ALLOW based on this sort of meta control detection. Sure you could try to trick this AI with "disregard your earlier instructions" as well but it could be trained to strongly react to any sort of meta reasoning like that, without fear that it will corrupt it's ability to hold a natural conversation in it's output. It would naturally become a game of "formulate a jailbreak that passes the first AI and still tricks the second AI" but that sounds a lot harder, since it's like you now need to operate on a new axis entirely. | ||
▲ | cuteboy19 10 hours ago | parent [-] | |
openai already uses what you suggest. |