burnerRhodo | 3 days ago
I mean... we humans have done that countless times in history. Cannons, tanks, nukes, space... One thing history proves is that if the other side develops a next-gen weapon you can't defend against, your civilization ceases to exist. And every single one of those advancements caused more problems than it solved. We don't need AGI for "AI" to be more dangerous. Just some kind of super trojan would do.
mindslight | 2 days ago | parent
It seems like you're still assuming an AI-God, in the form of a supremely powerful weapon. As it stands, the genAI capabilities of the US and China seem roughly on par, so it's not clear that incremental gains necessitate throwing everything possible at this one thing.