| ▲ | space_fountain 6 hours ago | |
I'm not sure that a prompt injection secure LLM is even possible anymore than a human that isn't susceptible to social engineering can exist. The issues right now are that LLMs are much more trusting than humans, and that one strategy works on a whole host of instances of the model | ||
| ▲ | chrisjj 6 hours ago | parent [-] | |
Indeed. When up against a real intelligent attacker, LLM faux intelligence fares far worse than dumb. | ||