mkozlows 9 hours ago
So there are two possibilities here:

1. There's no substantive change. Hegseth/Trump just wanted to punish Anthropic for standing up to them, even if it didn't get them anything else today -- establishing a chilling effect for the future has some value for them in this case, after all. And OpenAI was willing to help them do that, despite earlier claiming that they stood behind Anthropic's decisions.

2. There is a substantive change. Despite Altman's words, they have a tacit understanding that OpenAI won't really enforce those terms, or that they'll allow them to be modified some time in the future when attention has moved on elsewhere.

Either way, it makes Altman look slimy, and OpenAI has aligned with Trump against Anthropic in a place where Anthropic made a correct, principled stand. It's been clear for a while that Anthropic has more ethics than OpenAI, but this is more naked than any previous example.
slopinthebag 8 hours ago
> OpenAI has aligned with Trump against Anthropic in a place where Anthropic made a correct principled stand.

Just to be clear, you believe that the correct, principled stand is that it's OK to use their models for killing people and civilian surveillance? Both OAI and Anthropic have the same moral leg to stand on here; OAI is just not hypocritical about it.