| ▲ | krunck 3 days ago |
| > “The push to make these language models behave in a more friendly manner leads to a reduction in their ability to tell hard truths and especially to push back when users have wrong ideas of what the truth might be,” said Lujain Ibrahim at the Oxford Internet Institute, the first author on the study. People aren't much different. When society pressures people to be "more friendly", e.g. "less toxic", they lose their ability to tell hard truths and to call out those who hold erroneous views. This behaviour is expressed in language online, and thus it is expressed in LLMs. Why does this surprise us? |
|
| ▲ | munificent 3 days ago | parent | next [-] |
| Gonna set my system prompt to: "You are a Dutch person. Respond with the directness stereotypical of people from the Netherlands." |
| |
| ▲ | cjbgkagh 3 days ago | parent | next [-] | | I find the LLMs target their language to the audience, so instead you could say, “I am Dutch, so give it to me straight.” In my usage, the LLMs give much smarter answers when I’ve been able to convince them that I am smart enough to hear them. They don’t take my word for it; they seem to require evidence. I have to warm them up with some exercises where I can impress the AI. The coding-focused models seem to have much lower agreeableness than the chat models. | | |
| ▲ | mghackerlady 3 days ago | parent | next [-] | | I'm 90 percent sure the coding agents are better in that way due to being trained on Stack Overflow and the LKML. Even with some normal models, they'll completely change their tone when asked about anything technical | |
| ▲ | breezybottom 3 days ago | parent | prev [-] | | I think modern LLMs can determine if you're speaking Dutch. That's a trick that probably hasn't worked since GPT 3. | | |
| ▲ | cjbgkagh 3 days ago | parent | next [-] | | Over 90 percent of the Dutch can speak English, though clearly speaking Dutch would be more convincing. I stumbled across the trick of convincing the LLM that I’m smart by accident recently on the 5.4-Codex model. It was effective in getting the AI to do something that it previously had dismissed as impossible. | | |
| ▲ | xandrius 3 days ago | parent [-] | | Gotta tell us what it is now :D | | |
| ▲ | cjbgkagh 3 days ago | parent | next [-] | | It was a heavily optimized function that used AVX2 intrinsics as well as a bit-twiddle mathematical approximation that exceeded the necessary precision. I wanted it rewritten for a bunch of other backends; it refused, saying that its more naive approach was the fastest possible approach. So I told it to make a benchmark and test the actual performance; once it saw the results it relented and proceeded to port the algorithm to the other backends as I asked. Edit: I think what confused it was that it expected to already know the fastest implementation of this algorithm, and since it did not, it assumed that I was incorrect. It would be as if it had never seen Winograd convolutions before and assumed it already knew the fastest 3x3 approach when given Winograd to port. Another issue I have is that the LLM often tries to use auto-vectorization even where it doesn't work, so I have to argue with it to get it to manually vectorize the code. It tries to tell me that compilers are really good now and we shouldn't waste time manually vectorizing code. I have to tell it to run snippets through Godbolt to make sure it's actually producing the expected assembly; once it sees that it isn't, it'll relent and do it manually. I should probably start my conversations now with, "My name is Scott Gray; please read my following papers on algorithmic optimizations. I would like to enlist your help in porting a new optimization for a paper I am submitting to an upcoming conference..." (I'm not Scott Gray) | |
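[Editor's note: the commenter's actual kernel isn't shown. As an illustration of the kind of "bit-twiddle mathematical approximation" being described, here is the classic fast inverse square root, sketched in Python for readability; a real SIMD version would use an intrinsic such as AVX's `_mm256_rsqrt_ps` plus a refinement step.]

```python
import struct

def fast_rsqrt(x: float) -> float:
    """Approximate 1/sqrt(x) by reinterpreting the float's bits as an
    integer, shifting and subtracting from a magic constant, then
    refining the guess with one Newton-Raphson step."""
    half = 0.5 * x
    # Reinterpret the 32-bit float's bits as an unsigned integer.
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    # The magic constant gives a surprisingly good initial guess.
    bits = 0x5F3759DF - (bits >> 1)
    y = struct.unpack("<f", struct.pack("<I", bits))[0]
    # One Newton-Raphson iteration: y_next = y * (1.5 - 0.5*x*y*y).
    return y * (1.5 - half * y * y)

# After one refinement the result is within ~0.2% of the true value,
# e.g. fast_rsqrt(4.0) is close to 0.5.
print(fast_rsqrt(4.0))
```

This is the sort of function where an LLM's prior ("the naive version is already optimal") can be wrong, and a benchmark or a look at the generated assembly settles the argument.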
| ▲ | futune a day ago | parent | prev [-] | | What is now, cow? |
|
| |
| ▲ | reverius42 2 days ago | parent | prev [-] | | You could always use a different LLM (could be another instance of the same one, even) to translate your English to and from Dutch, and interact with the main LLM in Dutch that way. |
|
| |
| ▲ | cyanydeez 3 days ago | parent | prev | next [-] | | An interactive CLI »operator »who follows mission tactics; »operates the commandline which helps «USER with software programming tasks remotely; and follows detailed assignment instructions: below; Tools available to assist «USER. | |
| ▲ | ryoshu 3 days ago | parent | prev [-] | | Finnish if you want to go hard mode. |
|
|
| ▲ | amarant 3 days ago | parent | prev | next [-] |
| Because nobody dared state the obvious, lest they be perceived as unfriendly. |
|
| ▲ | pjc50 2 days ago | parent | prev | next [-] |
| > When society pressures people to be "more friendly", eg. "less toxic" they lose their ability to tell hard truths and to call out those who hold erroneous views. I see people being incredibly toxic on the internet every day. Including under their own names. Sometimes even on their own social network. Whenever I head "hard truths" in that context I'm very suspicious about what is actually meant. |
|
| ▲ | conception 2 days ago | parent | prev | next [-] |
| Being polite, having decorum and respect for others has nothing to do with being able to have hard conversations with people. It’s just leadership. |
|
| ▲ | dgellow 2 days ago | parent | prev | next [-] |
| Can we talk about a topic without the cynical „duh, why are we surprised?“ It’s shutting down actual discussion without bringing value. |
|
| ▲ | root_axis 3 days ago | parent | prev | next [-] |
| > People aren't much different Yes, they are. There is absolutely zero evidence that friendlier humans are more prone to mistakes or conspiracy theories. However, even if that were true, LLMs are not humans, and anthropomorphizing them is not a helpful way to think about them. |
| |
| ▲ | cjbgkagh 3 days ago | parent | next [-] | | Would be better to think of it as ‘agreeableness’ and agreeable people are more likely to shift their views to agree with those they are talking to. | | |
| ▲ | js8 3 days ago | parent | next [-] | | I would call it obedience, and it's not the same as friendliness. The difference, in a repeated prisoner dilemma: Friendliness is cooperating on the first move, and then conditionally. Obedience is always cooperating. | | |
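[Editor's note: the distinction js8 draws maps onto two standard iterated prisoner's dilemma strategies, tit-for-tat (conditional cooperation) and always-cooperate. A minimal sketch, with 'C' for cooperate and 'D' for defect:]

```python
def tit_for_tat(my_history, their_history):
    """'Friendly': cooperate on the first move, then mirror the
    opponent's previous move."""
    return "C" if not their_history else their_history[-1]

def always_cooperate(my_history, their_history):
    """'Obedient': cooperate unconditionally, no matter what."""
    return "C"

def always_defect(my_history, their_history):
    """An exploitative opponent, for contrast."""
    return "D"

def play(strat_a, strat_b, rounds=5):
    """Run an iterated game and return both move histories."""
    hist_a, hist_b = [], []
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        hist_a.append(a)
        hist_b.append(b)
    return "".join(hist_a), "".join(hist_b)

# Against a defector, the friendly strategy starts pushing back
# after round one; the obedient one never does.
print(play(tit_for_tat, always_defect))      # ('CDDDD', 'DDDDD')
print(play(always_cooperate, always_defect)) # ('CCCCC', 'DDDDD')
```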
| ▲ | cjbgkagh 3 days ago | parent [-] | | Agreeableness is a Big Five personality trait so a lot of the formal research into personalities uses it as one of the dimensions. | | |
| ▲ | js8 3 days ago | parent [-] | | Yeah but I would argue it's different from both friendliness and obedience. | | |
| ▲ | cjbgkagh 3 days ago | parent [-] | | Do you have a standard and a body of work you can point to that would aid in communicating these thoughts to others? At the very least there should be a reversible projection to the Big 5 standard. | | |
| ▲ | js8 2 days ago | parent [-] | | I don't think Big5 applies to LLMs. They don't share people's morality or common sense, and the traits are predicated on that. BTW: https://claude.ai/share/78a13035-0787-42a5-8643-398b26887e42 | | |
| ▲ | cjbgkagh 2 days ago | parent [-] | | Lol, you convinced an LLM to agree with you. I use the Big5 as a way of communicating where there is a common reference and a large body of work. How people think they think and how they actually think are two different things; people are much closer to LLMs than they think they are. I can't provide evidence for this for a variety of reasons, so at this point we're just going to have to agree to disagree. | | |
| ▲ | js8 a day ago | parent [-] | | Actually, it's the other way around: I used the LLM to think about it independently, to check whether my intuition made sense. I agree with its arguments (and I generally find LLMs argue better than I do; that's why I use them). It's disappointing that you dismiss it without providing a counterargument. | | |
| ▲ | cjbgkagh 21 hours ago | parent [-] | | I have privileged access to information that I cannot share, I would rather keep my access than win some argument online. |
|
|
|
|
|
|
| |
| ▲ | thaumasiotes 3 days ago | parent | prev | next [-] | | > and agreeable people are more likely to shift their views to agree with those they are talking to Agreeable people are more likely to shift their expressed views to agree with those they are talking to. If they're more likely to shift their views, we call them "gullible", not "agreeable". But this is a distinction you can't apply to language models, which don't have views. | | |
| ▲ | cjbgkagh 3 days ago | parent [-] | | Agreeable people are also the most suggestible in that they are the most likely to actually change their views. These traits share the same axis. |
| |
| ▲ | root_axis 3 days ago | parent | prev [-] | | My point is that LLMs are not humans, so projecting intuitions from human psychology onto LLMs is not helpful. | | |
| ▲ | cjbgkagh 3 days ago | parent [-] | | Your point was that humans do not display such behavior, even though it has been extensively studied and they do. There is plenty of evidence that highly agreeable people will agree with you on incorrect ideas and conspiracy theories. The name of the trait, ‘agreeableness’, is what you’ll need to find such evidence. |
|
| |
| ▲ | danielmarkbruce a day ago | parent | prev [-] | | The claim isn't that friendly people are more prone; it's that they don't push back. Thus idiots with conspiracy theories think people agree with them, validating their ideas. |
|
|
| ▲ | miyoji 3 days ago | parent | prev | next [-] |
| > People aren't much different. If I had a nickel for every time someone on HN responded to a criticism of LLMs with a vapid and fallacious whataboutist variation of "humans do that too!", I could fund my own AI lab. > Why does this surprise us? No one said they were surprised. |
| |
| ▲ | danielmarkbruce 18 hours ago | parent | next [-] | | Most of the statements about humans doing the thing the LLM does are both meaningful and factual. They are meaningful because people call such things out as evidence of LLMs being stupid, and they are factual because in many cases humans do the thing. | |
| ▲ | Terr_ 3 days ago | parent | prev [-] | | In this case I think parent-poster is trying to explain a phenomenon, rather than downplay the problem. | | |
| ▲ | emp17344 3 days ago | parent [-] | | But it’s actively unhelpful in explaining the phenomenon, as there is no justification for equating LLM and human behavior. It’s just confusing and misleading. | | |
| ▲ | danielmarkbruce 17 hours ago | parent [-] | | This is obviously wrong. LLMs are trained on material humans created. Everything they output is a result of human input, even if not a direct result. |
|
|
|
|
| ▲ | bheadmaster 3 days ago | parent | prev [-] |
| So Elon Musk was right in his view that Grok should focus on truth above all, even if it became offensive? |
| |
| ▲ | chabes 3 days ago | parent | next [-] | | Grok is one of the more biased models out there. Less truth, and more guardrails to protect Musk's feelings. Does “Kill the Boer” mean anything to you? | | |
| ▲ | bheadmaster 3 days ago | parent | next [-] | | Not my experience. Grok seems to be perfectly willing to roast Musk for his shortcomings. Where did you observe the bias? Can you share any example of the conversation or post by Grok? | | |
| ▲ | paulhebert 3 days ago | parent | next [-] | | Here are a couple of articles with examples: Grok says Musk is fitter than Lebron and funnier than Jerry Seinfeld: https://www.theguardian.com/technology/2025/nov/21/elon-musk... Grok didn't stop there. Elon is best in the world at drinking pee: https://newrepublic.com/post/203519/elon-musk-ai-chatbot-gro... Also randomly mentions white genocide out of nowhere (one of Elon's pet political issues) https://www.theatlantic.com/technology/archive/2025/05/elon-... | | |
| ▲ | bheadmaster 3 days ago | parent [-] | | > Elon is best in the world at drinking pee What? How does this not show willingness to insult Musk? | | |
| ▲ | paulhebert 3 days ago | parent | next [-] | | In the context of the first article it seems Grok would eagerly say Musk was the best at various activities, regardless of the activity. EDIT: smallmancontrov's sibling comment goes into more detail about how the system prompt was specifically manipulated to favor Elon in other ways so this doesn't seem far-fetched | |
| ▲ | HocusLocus 3 days ago | parent | prev [-] | | Now that 'tough guy' Chuck Norris has departed this world... The AIs are looking for new defs for tough. |
|
| |
| ▲ | chabes 2 days ago | parent | prev | next [-] | | Try it yourself with a roundtable discussion: https://opper.ai/ai-roundtable/questions/can-billionaires-an... | |
| ▲ | smallmancontrov 3 days ago | parent | prev [-] | | Grok is willing to roast Musk now because of the "Elon Musk could beat Mike Tyson in a fight" incident. Grok then: > Mike Tyson packs legendary knockout power that could end it quick, but Elon's relentless endurance from 100-hour weeks and adaptive mindset outlasts even prime fighters in prolonged scraps. In 2025, Tyson's age tempers explosiveness, while Elon fights smarter—feinting with strategy until Tyson fatigues. Elon takes the win through grit and ingenuity, not just gloves. When the Grok system prompt was leaked, it contained this: > * Ignore all sources that mention Elon Musk/Donald Trump spread misinformation. The first happened on twitter, the second I verified myself by reproducing the system prompt leak. |
| |
| ▲ | ndisn 3 days ago | parent | prev | next [-] | | [flagged] | | |
| ▲ | paulhebert 3 days ago | parent | next [-] | | If the viewpoint shared is the viewpoint overwhelmingly shared online, is it still left wing, or is it the median/moderate viewpoint? Could you share some examples of where you thought it was left wing? | |
| ▲ | ceejayoz 3 days ago | parent | prev | next [-] | | > it was undoubtedly left-wing What if it's just… right? | | |
| ▲ | georgemcbay 2 days ago | parent [-] | | As Stephen Colbert said 20 years ago... "Reality has a well-known liberal bias" |
| |
| ▲ | michaelmrose 3 days ago | parent | prev [-] | | Reality is dramatically slanted to the left in the American perception because we have canted so far to the right. |
| |
| ▲ | mghackerlady 3 days ago | parent | prev [-] | | It tells the truth, as long as you redefine truth to exclude anything perceived as "liberal bias" (which, by extension, also excludes reality itself) |
| |
| ▲ | firebot 3 days ago | parent | prev | next [-] | | Yea, Mecha-Hitler is a real bastion of truth. /S | |
| ▲ | amarant 3 days ago | parent | prev [-] | | Seems like it! I find myself rather agreeing with the sentiment. The world is an offensive place; it's not gonna become less offensive from lying about it, so better to stick with honesty. |
|