snthpy 3 days ago
Does anyone ever worry about prompt injection attacks against yourself in these? When I was into hypnosis and NLP between one and two decades ago, I used to worry about what the instructions were once I was hypnotized. I lacked the terminology then, but these days we would call these prompt injections, just against the human brain. I guess social engineering is another form, although that's probably more akin to a CSRF or flawed-auth-logic exploit.