Remix.run Logo
simonw 3 days ago

There was a minor scandal about exactly that a few months ago: https://asia.nikkei.com/business/technology/artificial-intel...

"Research papers from 14 academic institutions in eight countries -- including Japan, South Korea and China -- contained hidden prompts directing artificial intelligence tools to give them good reviews, Nikkei has found."

Amusingly I tried an experiment with some of those papers with hidden text against frontier models at the time and found that the trick didn't actually work! The models spotted the tricks and didn't fall for them.

At least one conference has an ethics policy saying you shouldn't attempt this though: https://icml.cc/Conferences/2025/PublicationEthics

"Submitting a paper with a "hidden" prompt is scientific misconduct if that prompt is intended to obtain a favorable review from an LLM. The inclusion of such a prompt is an attempt to subvert the peer-review process. Although ICML 2025 reviewers are forbidden from using LLMs to produce their reviews of paper submissions, this fact does not excuse the attempted subversion."

cubefox 3 days ago | parent [-]

Intuitively it does excuse it though.