pessimizer | 6 days ago
I'm loving and being astonished by every moment of working with these machines, but to me they're still talking lamps. I don't need them to cater to my ego; I'm not that fragile, and the lamp's opinion is not going to cheer me up. I just want it to do what I ask, which it is very good at. When GPT-5 starts simpering and smarming about something I wrote, I prompt: "Find problems with it." "Find problems with it." "Write a bad review of it in the style of NYRB." "Find problems with it." "Pay more attention to the beginning." "Write a comment about it as a person who downloaded the software, could never quite figure out how to use it, deleted it, and is now commenting angrily under a glowing review by someone he thinks may have been paid to write it." Hectoring the thing gets me where I want to go: when you yell at it that way, it actually has to think, and it really stops flattering you. "Find problems with it" is a prompt that allows it even to make unfair, manipulative criticism. It's like bug spray for smarm. The tone becomes more like that of a slightly irritated and frustrated but absurdly gifted student being lectured by you, the professor.
devin | 6 days ago | parent
There is no prompt which causes an LLM to "think".