WatchDog 7 hours ago

Overall it's worse than the other frontier models, but it's decent for queries about breaking news, due to being trained on twitter data. It's also better for queries about controversial topics, and topics that the other labs have deemed to be "unsafe".

Politically, it differs quite a bit from other models.[0] It's right leaning, although it's closer to neutral than other models, though defining what counts as neutral is a challenge.

[0]: https://arxiv.org/abs/2603.23841

seabass-labrax 5 hours ago | parent | next [-]

The study you link to doesn't take into consideration the Overton window of opinions. Perhaps there's some dimension along which you could say that one ideology lies 'opposite' to another political persuasion, but that doesn't necessarily mean that the two ideologies are equally acceptable to support in a given society.

I don't think calling the definition of 'neutral' a 'challenge' does the question justice - neutrality will always be context-dependent, and what may be in the center of the Overton window of one society may be unpopular or even highly illegal in a different society.

numpad0 2 hours ago | parent | prev | next [-]

Wasn't it just, likely, a Claude proxy, then a local LLM for a while, then now-ish an OpenRouter proxy?

bdangubic 5 hours ago | parent | prev | next [-]

> due to being trained on twitter data

Twitter data is 70%+ bots (probably more than that now).

BurningFrog 5 hours ago | parent [-]

Grok is of course also trained on the same giant blob of "all human writing" that the other models are trained on.

BurningFrog 5 hours ago | parent | prev [-]

The stated goal for Grok is to be as truthful as possible.

Maybe that shows up as being more right leaning than the competition.

Natfan 4 hours ago | parent [-]

stated goal ≠ output

see: the Democratic People's Republic of Korea, the Chinese Communist Party, America First