mastermage, 8 hours ago:
The more interesting question I have is whether such prompt injection attacks can ever actually be avoided, given how GenAI works.
PurpleRamen, 3 hours ago:
Removing the risk for most tasks should be possible. Just build the same cages other apps already have. Also add a bit more transparency, so people know better what the machine is doing, maybe even with a mandatory user acknowledgment for potentially problematic actions, similar to the root-access dialogs we have now. I mean, you don't really need access to all data when you are just setting a clock or playing music.
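A rough sketch of what such a cage could look like: tool calls from the model go through a default-deny dispatcher, and sensitive ones require an explicit user acknowledgment. All names here (HARMLESS, SENSITIVE, confirm, dispatch) are made up for illustration, not any real assistant API.

```python
# Hypothetical permission gate for an AI assistant's tool calls.
HARMLESS = {"set_alarm", "play_music"}       # no private data needed
SENSITIVE = {"read_contacts", "send_email"}  # needs explicit user sign-off

def confirm(action: str, detail: str) -> bool:
    """Mandatory acknowledgment dialog, like a root-access prompt."""
    return input(f"Allow '{action}' ({detail})? [y/N] ").strip().lower() == "y"

def dispatch(action: str, detail: str) -> str:
    if action in HARMLESS:
        return f"ok: {action}"  # runs without touching user data
    if action in SENSITIVE and confirm(action, detail):
        return f"ok: {action}"
    return "refused"            # default-deny everything else
```

An injected prompt could still request read_contacts, but the attack then surfaces as a visible dialog instead of a silent data grab.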
Ono-Sendai, 4 hours ago:
They could be, if models were trained properly, with more carefully delineated prompts.
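One reading of "carefully delineated prompts": trusted instructions and untrusted content get wrapped in reserved sentinel tokens that the model is trained to respect, and those tokens are stripped from anything untrusted so it cannot escape its section. The token names below are hypothetical, and this only helps if training actually enforces the boundary.

```python
# Hypothetical sentinel tokens separating trusted instructions from data.
INSTR_OPEN, INSTR_CLOSE = "<|instruction|>", "<|/instruction|>"
DATA_OPEN, DATA_CLOSE = "<|data|>", "<|/data|>"

def build_prompt(instruction: str, untrusted: str) -> str:
    # Remove sentinel tokens from untrusted input so it cannot
    # masquerade as an instruction.
    for tok in (INSTR_OPEN, INSTR_CLOSE, DATA_OPEN, DATA_CLOSE):
        untrusted = untrusted.replace(tok, "")
    return (f"{INSTR_OPEN}{instruction}{INSTR_CLOSE}\n"
            f"{DATA_OPEN}{untrusted}{DATA_CLOSE}")
```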
larodi, 8 hours ago:
Perhaps not, and it is indeed not unwise of Apple to stay away for a while, given their ultra-focus on security.