vhiremath4 | 4 hours ago

This is an interesting perspective. What happens if there is a large global war? Do researchers who were previously against working with the DoD end up flipping out of duty? Does the war budget go up? Does the DoD lift any ban on Anthropic for the sake of getting the best model, and does Anthropic soften its stance against working on autonomous weapons systems? I don't know the answers to these questions, but if the answer to even one or two of them is "yes," then I think the equation flips quite a bit.

This is what I'm seeing in the world right now, and it's disconcerting:

1. Ukraine and Russia have been in a conflict that has dragged on much longer than most people would have guessed. This has created a divide in political allegiance within the United States and Europe.

2. We captured the leader of Venezuela. Cuba is now scared they are next.

3. We just bombed Iran and killed their supreme leader.

4. China and the US are, of course, in a massive economic race for world-power supremacy. Tensions have been steadily rising, and they are now feeling the pressure of oil exports from Iran grinding to a halt.

5. The past couple of days, Macron has been trying to quell tension between Israel and Lebanon.

I really hope we are not headed into war. I hope the fact that we all have nukes and rely on each other's supply chains deters one. But man, does it feel like the odds are increasing in favor of one, and man, does that seem to throw a wrench in this whole thing with Anthropic vs. OpenAI.
This is an interesting perspective. What happens if there is a large global war? Do researchers who were previously against working with the DoD end up flipping out of duty? Does the war budget go up? Does the DoD decide to lift any ban on Anthropic for the sake of getting the best model and does Anthropic warm its stance on not working with autonomous weapons systems? I don’t know the answers to these questions, but if the answer is “yes” to at least 1 or 2, then I think the equation flips quite a bit. This is what I’m seeing in the world right now, and it’s disconcerting: 1. Ukraine and Russia have been in a skirmish that has been drawn out much longer than I would guess most people would have guessed. This has created a divide in political allegiance within the United States and Europe. 2. We captured the leader of Venezuela. Cuba is now scared they are next. 3. We just bombed Iran and killed their supreme leader. 4. China and the US are, of course, in a massive economic race for world power supremacy. The tensions have been steadily rising, and they are now feeling the pressure of oil exports from Iran grinding to a halt. 5. The past couple days Macron has been trying to quell tension between Israel and Lebanon. I really do not hope we are not headed into war. I hope the fact that we all have nukes and rely on each others’ supply chains deters one. But man does it feel like the odds are increasing in favor of one, and man does that seem to throw a wrench in this whole thing with Anthropic vs. OpenAI. | ||