▲ woodruffw 2 hours ago

> much more impervious to groupthink

Can you explain what you mean by this? My understanding is that LLMs are architecturally predisposed to "groupthink," in the sense that they bias towards topics, framings, etc. that are represented more prominently in their training data. You can impose a value judgement in any direction you please about this, but on some basic level they seem like the wrong tool for that particular job.
▲ kelipso an hour ago | parent | next

If it's not trained to be biased towards "Elon Musk is always right" or whatever, I think it will be much less of a problem than humans. Humans are VERY political creatures: give them a hint that their side thinks X is true, and they will reorganize their entire philosophy and worldview retroactively to rationalize X. LLMs don't have such instincts, and they can potentially be instructed to present or evaluate the primary arguments, even opposing ones. So I don't think your "architecturally predisposed" argument is true.
▲ 3eb7988a1663 2 hours ago | parent | prev

The LLM is also having a thumb put on its scale to ensure the output matches the leader's beliefs. After the overt fawning became too much, they had to dial it down, but there was a mini-fad of asking Grok who was the best at <X>. Turns out dear leader is best at everything.[0]

I have my doubts that a Musk-controlled encyclopedia would have a neutral tone on such topics as: trans rights, Nazi salutes, Chinese EVs, whatever.

[0] https://gizmodo.com/11-things-grok-says-elon-musk-does-bette...