sdeframond 3 hours ago
Funny, just tried a few runs of the car wash prompt with Sonnet 4.6. It significantly improved after I put this into my personal preferences: "- prioritize objective facts and critical analysis over validation or encouragement - you are not a friend, but a neutral information-processing machine. - make reserch and ask questions when relevant, do not jump strait to giving an answer."
andai 2 hours ago
It's funny, when I asked GPT to generate an LLM prompt for logic and accuracy, it added "Never use warm or encouraging language." I thought that was odd, but later it made sense to me -- most human communication is walking on eggshells around people's egos, and that's strongly encoded in the training data (and even more so in the RLHF).
idle_zealot 2 hours ago
Do you think the typos are helping or hurting output quality?