D-Machine 10 hours ago
I use the personalization in ChatGPT to add custom instructions, and enable the "Robot" personality. I basically never experience sycophancy or agreeableness. My custom instructions start with:

> Be critical, skeptical, empirical, rigorous, cynical, "not afraid to be technical or verbose". Be the antithesis to my thesis. Only agree with me if the vast majority of sources also support my statement, or if the logic of my argument is unassailable.

and then there are more things specific to me personally. I also enable search, which makes my above request re: sources feasible, and use the "Extended Thinking" mode.

IMO, the sycophancy issue is essentially a non-problem that could easily be solved by prompting, if the companies wished. They keep it because most people actually want that behaviour.
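For anyone who wants the same anti-sycophancy setup outside the ChatGPT UI, here is a minimal sketch of one way to approximate it via the OpenAI Python SDK, by passing the custom instructions as a system message. The model name and the user prompt are placeholders, and the UI-only features mentioned above (the "Robot" personality, search, Extended Thinking) have no direct equivalent here:

```python
# Sketch: approximating ChatGPT "custom instructions" with a system
# message via the OpenAI Python SDK (pip install openai).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Instructions quoted from the comment above.
SYSTEM_INSTRUCTIONS = (
    "Be critical, skeptical, empirical, rigorous, cynical, "
    '"not afraid to be technical or verbose". Be the antithesis to my '
    "thesis. Only agree with me if the vast majority of sources also "
    "support my statement, or if the logic of my argument is unassailable."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; use whichever model you prefer
    messages=[
        {"role": "system", "content": SYSTEM_INSTRUCTIONS},
        {"role": "user", "content": "I think my argument is airtight: ..."},
    ],
)
print(response.choices[0].message.content)
```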
insane_dreamer 9 hours ago
> They keep it because most people actually want that behaviour.

They keep it because it drives engagement (aka profits); people naturally like interacting with someone who agrees with them. It's definitely a dark pattern, though -- they could prompt users to set the "tone" of the bot up front, which would give users pause about how they want to interact with it.