| ▲ | behnamoh 3 days ago | |||||||
I actually want prompt injection to remain possible. So many lazy academic paper reviewers nowadays delegate the review process to AI. It'd be cool if we could inject prompts in the paper that would stop the AI from aiding in such situations. In my experience, prompt injection techniques work for non-reasoning models but gpt-5-high easily ignores them... | ||||||||
| ▲ | simonw 3 days ago | parent [-] | |||||||
There was a minor scandal about exactly that a few months ago: https://asia.nikkei.com/business/technology/artificial-intel... "Research papers from 14 academic institutions in eight countries -- including Japan, South Korea and China -- contained hidden prompts directing artificial intelligence tools to give them good reviews, Nikkei has found." Amusingly I tried an experiment with some of those papers with hidden text against frontier models at the time and found that the trick didn't actually work! The models spotted the tricks and didn't fall for them. At least one conference has an ethics policy saying you shouldn't attempt this though: https://icml.cc/Conferences/2025/PublicationEthics "Submitting a paper with a "hidden" prompt is scientific misconduct if that prompt is intended to obtain a favorable review from an LLM. The inclusion of such a prompt is an attempt to subvert the peer-review process. Although ICML 2025 reviewers are forbidden from using LLMs to produce their reviews of paper submissions, this fact does not excuse the attempted subversion." | ||||||||
| ||||||||