nerdsniper 3 days ago

Generally, in a cognitive context it's only possible to "do thing" or "do other thing". Even for mammals, it's much harder to cognitively "not do thing". One of the biggest pieces of advice I give people is that if there's some habit or repeated behavior they want to stop, it's generally not effective (for a lot of people) to tell yourself "don't do that anymore!" and much, much more effective to tell yourself what you should do instead.

This also applies to dogs. A lot of people keep telling their dog "stop" or "don't do that", but it's so much more effective to train your dog on what they should be doing instead of that thing.

It's very interesting to me that this also seems to apply to LLMs. I'm a big skeptic in general, so I keep an open mind and assume that there's a different mechanism at play rather than concluding that LLMs are "thinking like humans". It's still interesting in its own context, though!
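
To make the idea concrete, here's a minimal sketch of the same guardrail phrased both ways in a system prompt. Everything here is illustrative: the prompt text, the helper, and the example request are mine, not taken from any vendor's actual system prompt.

    # Two phrasings of the same guardrails for a hypothetical coding assistant.
    # Illustrative prompts only; not taken from any real system prompt.

    NEGATIVE_FRAMING = (
        "Do not use deprecated APIs. Do not write code without error handling. "
        "Don't respond with untested snippets."
    )

    # Positive framing: tell the model what to do instead.
    POSITIVE_FRAMING = (
        "Use only currently supported APIs. Wrap fallible calls in error handling. "
        "Label any snippet you haven't been able to verify as untested."
    )

    def build_messages(system_prompt: str, user_request: str) -> list[dict]:
        """Assemble a chat-style message list from a system prompt and a user turn."""
        return [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_request},
        ]

    if __name__ == "__main__":
        request = "Write a file-download helper in Python."
        for name, prompt in [("negative", NEGATIVE_FRAMING), ("positive", POSITIVE_FRAMING)]:
            print(f"--- {name} framing ---")
            for msg in build_messages(prompt, request):
                print(f"{msg['role']}: {msg['content']}")

The positive version carries the same constraints but gives the model a target behavior for each one, rather than only a behavior to suppress.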

ewoodrich 3 days ago | parent | next [-]

And yet, despite this being a frequently recommended pro tip these days, neither OpenAI nor Anthropic seems to shy away from using "do not" / "does not" in their system prompts. By my quick count (a rough counting sketch follows the links below), there are 20+ negative commands in Anthropic's (official) Opus system prompt and 15+ in OpenAI's (purported) GPT-5 system prompt. There are plenty of positive directions as well, of course, but OpenAI in particular still seems to rely on a lot of ALL CAPS and *emphasis*.

https://docs.anthropic.com/en/release-notes/system-prompts#a...

https://www.reddit.com/r/PromptEngineering/comments/1mknun8/...
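
A crude count like that is easy to reproduce. Here's a minimal sketch: the regex patterns and the stand-in text are mine; to get the actual numbers you'd paste in the published prompt text from the links above.

    import re

    # Illustrative stand-in text; replace with the actual published prompt
    # text from the links above to reproduce the counts.
    prompt_text = """
    Claude does not retain information across chats.
    Do not reproduce copyrighted song lyrics.
    Never claim to have feelings. Avoid starting with "I".
    ALWAYS cite sources. Do NOT use headers in casual conversation.
    """

    # Common negative-command patterns (case-insensitive).
    NEGATIVE_PATTERN = re.compile(
        r"\b(?:do(?:es)?\s+not|don'?t|never|avoid)\b", re.IGNORECASE
    )

    count = len(NEGATIVE_PATTERN.findall(prompt_text))
    print(f"negative commands found: {count}")  # 5 for the stand-in text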

DiscourseFan 2 days ago | parent | prev [-]

The LLMs function very much like Freud's theory of the unconscious: they do not say "no"; every token is connected to every other in some strange pattern that we can't fully comprehend.