| ▲ | codebje 2 hours ago | ||||||||||||||||
Being fooled into thinking data is instruction isn't the same as being unable to distinguish them in the first place, and being coerced or convinced to bypass rules that are still known to be rules I think remains uniquely human. | |||||||||||||||||
| ▲ | TeMPOraL 2 hours ago | parent | next [-] | ||||||||||||||||
> and being coerced or convinced to bypass rules that are still known to be rules I think remains uniquely human. This is literally what "prompt injection" is. The sooner people understand this, the sooner they'll stop wasting time trying to fix a "bug" that's actually the flip side of the very reason they're using LLMs in the first place. | |||||||||||||||||
| ▲ | vidarh 2 hours ago | parent | prev | next [-] | ||||||||||||||||
This makes no sense to me. Being fooled into thinking data is instruction is exactly evidence of an inability to reliably distinguish them. And being coerced or convinced to bypass rules is exactly what prompt injection is, and very much not uniquely human any more. | |||||||||||||||||
| |||||||||||||||||
| ▲ | PunchyHamster an hour ago | parent | prev [-] | ||||||||||||||||
the second leads to first, in case you still don't realize | |||||||||||||||||