layer8 | 4 days ago
I was more hinting that if we fail to plan for the obvious stuff, what makes you think we'll be better at planning for the more obscure possibilities? The former should be much easier, but since we fail at it, we should first concentrate on getting better at that.
xpe | 3 days ago | parent
Let’s get specific. If we’re talking about DARPA’s research agenda or the US military’s priorities, I would say they are quite capable of planning for speculative scenarios and long-term effects, for various reasons, including their decision-making structures and funding. If we’re talking about shifting people’s mindsets about AI risks and building a movement, the time is now.

Luckily we’ve got foundations to build on. We don’t need to practice something else first. We have examples of trying to prime the public to pay attention to other long-term risks, such as global warming, pandemic readiness, and nuclear proliferation. Now we should add long-term AI risk to the menu.

And I would not say that I’m anything close to “optimistic” in the probabilistic sense about building the coalition we need, but we must try anyway. Motivation can be found without naïve optimism: a sense of acting with purpose is a useful state of mind that isn’t coupled to one’s guesses about the most likely outcomes.