▲ | echelon 5 days ago | ||||||||||||||||
Holy shit. That just made it obvious to me. A "smart" VLM will just read the text and trust it. This is a big deal. I hope those nightshade people don't start doing this. | |||||||||||||||||
▲ | pjc50 5 days ago | parent | next [-] | ||||||||||||||||
> I hope those nightshade people don't start doing this. This will be popular on bluesky; artists want any tools at their disposal to weaponize against the AI which is being used against them. | |||||||||||||||||
| |||||||||||||||||
▲ | koakuma-chan 5 days ago | parent | prev [-] | ||||||||||||||||
I don't think this is any different from an LLM reading text and trusting it. Your system prompt is supposed to be higher priority for the model than whatever it reads from the user or from tool output, and, anyway, you should already assume that the model can use its tools in arbitrary ways that can be malicious. | |||||||||||||||||
|