Remix.run Logo
pixl97 21 hours ago

I'd be careful assuming that is completely true. Image recognition models can/do have their own set of attacks against them that may not be easily noticeable to humans. My first thought on this is injecting noise into images that can be picked up as instructions to the LLM when it decodes the printed page.