▲ ericpauley 5 hours ago
This is an oft-repeated meme, but I'm convinced the people saying it are either blindly repeating it, using bad models or system prompts, or running into some other issue. Claude Opus will absolutely push back if you disagree. I routinely push back on Claude only to discover, on further evaluation, that the model was correct.

As a test, I just did exactly what you said in a Claude Opus 4.6 session about another HN thread. Claude considered* the contradiction, evaluated additional sources, and responded backing up its original claim with more evidence.

I will add that I use a system prompt that explicitly discourages sycophancy, but that is a single-sentence expression of preference, not an indication of fundamental model weakness.

* I'll leave the anthropomorphism discussions to Searle; empirically, this is the observed output.
▲ odo1242 3 hours ago
Claude Opus 4.6 is the best possible model to use for this test, with the least sycophancy. OpenAI and Gemini models are bad in comparison.
▲ jazzyjackson 4 hours ago
If you have 10,000 people flipping coins over and over, one person will be experiencing a streak of heads and another a streak of tails. Which is to say: of a million people who just started playing with LLMs, most will get hit-or-miss results, while one guy wins the neural-net lottery and has the experience of the AI nailing every request, and some poor bloke trying to see what all the hype is about cannot get one response that isn't fully hallucinated garbage.
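The streak intuition above is easy to check with a quick simulation; a minimal sketch, where the 10,000 players and 20 flips per player are illustrative numbers, not anything from the comment:

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

def longest_streak(flips):
    """Length of the longest run of identical outcomes in a sequence."""
    best = cur = 1
    for prev, nxt in zip(flips, flips[1:]):
        cur = cur + 1 if nxt == prev else 1
        best = max(best, cur)
    return best

# 10,000 people each flip a fair coin 20 times.
streaks = [
    longest_streak([random.random() < 0.5 for _ in range(20)])
    for _ in range(10_000)
]

avg = sum(streaks) / len(streaks)
# Any single player's longest run is modest on average, but across
# 10,000 players somebody almost surely sees a very long streak.
print(f"average longest run: {avg:.1f}, best across players: {max(streaks)}")
```

The average longest run hovers around 4-5 flips, while the luckiest player typically sees a streak well into double digits: same fair coin, wildly different individual experiences.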
▲ basilikum 4 hours ago
Can you share your system prompt?
▲ dumpsterdiver 5 hours ago
[dead]