ethin | 3 days ago
It does this to me too. I have to add instructions like "Do not hesitate to push back or challenge me. Be cold, logical, direct, and engage in debate with me." to get it to act like something I'd actually want to interact with.

I know that in most cases my instinct is probably correct, but I'd prefer that something supposedly superhuman and infinitely smarter than me (as the AI pumpers like to claim) would actually call me out when I say something dumb or make an incorrect assumption, instead of flattering me and making me "think" I'm right when I might be completely wrong.

Honestly, I feel like this exact behavior from LLMs is what has caused cybersecurity to go out the window. People get flattered and glazed wayyyy too much about their ideas, because they talk to an LLM about them and the LLM never says "Uh, no, dumbass, doing it this way would be a horrifically bad idea! And this is why!" I get the assumption that the user is usually correct. But even if the LLM ends up spewing bullshit when debating me, it at least gives me other avenues to approach the problem that I might not have thought of on my own.
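For what it's worth, here's roughly how I wire that instruction in when going through the API rather than the web UI's custom-instructions field. This is a minimal sketch assuming the OpenAI Python SDK; the model name and the user question are just placeholders, not a recommendation:

    # Minimal sketch, assuming the OpenAI Python SDK.
    # Model name and user question below are placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    SYSTEM_PROMPT = (
        "Do not hesitate to push back or challenge me. "
        "Be cold, logical, direct, and engage in debate with me."
    )

    reply = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            # The system message is where the anti-sycophancy framing goes.
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": "Is storing passwords in plain text fine for an MVP?"},
        ],
    )
    print(reply.choices[0].message.content)

It's not a cure, but putting the framing in the system message rather than repeating it in every user turn seems to make the pushback stick longer across a conversation.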