jameshart | 5 days ago
Well, you’d also be forgiven for thinking ‘how on earth can a social website chatbot be a white supremacist?’ And yet xAI managed to prove it is a legitimate concern. xAI has a shocking track record of poor decisions when it comes to training and prompting their AIs. If anyone can make a partisan coding assistant, they can. Indeed, given their leadership and past performance, we might expect them to explicitly try.
simianwords | 5 days ago | parent
What’s their incentive to do this? What do they gain by making a partisan model instead of one that just works well?
dudeinjapan | 5 days ago | parent
Perhaps you’ve never heard of Tay? Microsoft did pioneering work in the Nazi chatbot space.