simianwords 5 days ago
What was the alternative? This was clearly an oversight, and that much was admitted. Are you suggesting that an oversight like this is reason enough not to use the model? I don’t see the big problem here. The model said some unsavoury things, the problem was admitted and fixed; why is this making people lose their minds? It has to be performative, because I can’t explain it any other way.
bhauer 5 days ago
Yes, it is performative. As is most of the outrage in this thread.
jameshart 5 days ago
That’s an uncharitable world view. ‘People who reach different conclusions from mine based on the same events must be being dishonest’?

From the outside, the Grok mechahitler incident appeared very much to be the embodiment of Musk’s top-down ‘free speech absolutist’ drive to strip the ‘political correctness’ shackles from Grok; the prompting changes were driven by his setting that direction. It became apparent very early that those prompt changes were causing problems, but reversion seemed to be something X had to be pressured into; they were unwilling to treat it as a problem until the mechahitler thread. This all speaks to his having a particular vision for what he wants xAI agents to be, something that continues to be expressed in things like the Ani product and other bot personas.

The Microsoft ‘Tay’ incident was triggered through naïveté. The Grok mechahitler incident seems to have been triggered through hubris and a delight in trolling. Those are very different motivations.