xpe 3 days ago

Let’s get specific.

If we’re talking about DARPA’s research agenda or the US military’s priorities, I would say they are quite capable of planning for speculative scenarios and long-term effects, for various reasons, including their decision-making structures and funding.

If we’re talking about shifting people’s mindsets about AI risks and building a movement, the time is now. Luckily, we’ve got foundations to build on; we don’t need to practice something else first. We have examples of trying to prime the public to pay attention to other long-term risks, such as global warming, pandemic readiness, and nuclear proliferation. Now we should add long-term AI risk to the menu.

And I would not say that I’m anything close to “optimistic” in the probabilistic sense about building the coalition we need, but we must try anyway. Motivation can be found without naïve optimism: a sense of acting with purpose is a useful state of mind that need not be coupled to one’s guesses about the most likely outcomes.

amanaplanacanal 3 days ago

It's hard.

Take global warming as an example: it is a real thing that is happening. We have measurements of CO2 concentrations and global temperatures, and most people accept that it is real. And still getting anybody to do anything about it is nearly impossible.

Now take a hypothetical risk of something that may happen sometime in the distant future, or may not. I don't see how you would be able to get anybody to care about that.

xpe 3 days ago

> And still getting anybody to do anything about it is nearly impossible.

Why exaggerate like this? Significant actions have been taken.

> I don't see how you would be able to get anybody to care about that.

Why exaggerate like this? Many people care.