xpe 5 days ago

> what differentiates AI from other non physical efficiency tools?

At some point: (1) general intelligence, i.e. adaptivity; (2) self-replication; (3) self-improvement.

amanaplanacanal 4 days ago | parent | next [-]

We don't have any more of an idea how to get to 1, 2, or 3 than we did 50 years ago. LLMs are cool, but they seem unlikely to do any of those things.

xpe 4 days ago | parent [-]

I encourage everyone not to claim “X seems unlikely” when it comes to high-impact risks. That thinking pattern often leads to pruning one’s decision tree far too soon. To plan well, we have to reason over an uncertain future that includes many weird and unfamiliar scenarios.
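To see why, here's a toy expected-value calculation in Python (all numbers are hypothetical, purely for illustration): dropping a branch just because it “seems unlikely” can discard the term that dominates the expected loss.

    # Made-up numbers: a 2% chance of a catastrophic outcome dominates
    # the expected loss, even though that branch "seems unlikely" and
    # would be pruned by the heuristic above.
    p_catastrophe = 0.02
    loss_catastrophe = 1_000_000  # arbitrary units
    loss_baseline = 10

    ev_full_tree = (p_catastrophe * loss_catastrophe
                    + (1 - p_catastrophe) * loss_baseline)
    ev_pruned = loss_baseline  # the "unlikely" branch dropped entirely

    print(ev_full_tree)  # 20009.8 -- dominated by the rare branch
    print(ev_pruned)     # 10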

layer8 4 days ago | parent [-]

We already fail to plan for a lot of high-impact things that are exceedingly likely. Maybe we should tackle those first.

xpe 4 days ago | parent [-]

I am so tired of people acting like planning for an uncertain world is a zero-sum game, decided by one central actor in a single-pipeline execution model. I’ll unpack this below.

The argument above (or some version of it) gets repeated over and over, but it is deeply flawed for various reasons.

The argument implies that “we” is a single agent that must finish one set of tasks before starting others. In the real world, different collections of people can work on different projects simultaneously, in various orderings.

This is very different from optimizing an instruction pipeline on a single-core microprocessor. In the real world, different kinds of tasks operate on very different timescales.

As an example, think about how change happens in society. Should we only talk about one problem at a time? Of course not. Why? The pipeline for solving problems is long and uncertain, so you have to parallelize. Raising awareness of an issue can be relatively slow. Do you know what is even slower? Reframing an issue in a way that gets into people’s brains and language patterns. Once a conceptual model exists and people pay attention, building a movement among “early adopters” has a fighting chance. If that goes well, political influence might follow.
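To make the pipeline analogy concrete, here's a toy Python sketch (task names and timescales are made up): independent tracks of societal work can run concurrently, so overall progress is bounded by the longest track rather than by the sum of all of them, as it would be in a serial pipeline.

    # Toy model: independent groups working in parallel on different
    # timescales. Task names and durations are hypothetical.
    import concurrent.futures
    import time

    def work(task, years):
        time.sleep(years * 0.1)  # compress "years" into a quick demo
        return f"{task} (~{years} yr track) done"

    tasks = {"raise awareness": 2, "reframe the issue": 10, "build a movement": 5}

    # Serial pipeline: finishes in ~sum of durations (17 "years").
    # Parallel tracks: finishes in ~the longest track (10 "years").
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = [pool.submit(work, name, yrs) for name, yrs in tasks.items()]
        for f in concurrent.futures.as_completed(futures):
            print(f.result())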

layer8 4 days ago | parent [-]

I was more hinting at the fact that if we fail to plan for the obvious stuff, what makes you think we’ll be better at planning for the more obscure possibilities? The former should be much easier, but since we fail even at it, we should first concentrate on getting better at that.

xpe 3 days ago | parent [-]

Let’s get specific.

If we’re talking about DARPA’s research agenda or the US military’s priorities, I would say they are quite capable of planning for speculative scenarios and long-term effects, for various reasons including decision-making structure and funding.

If we’re talking about shifting people’s mindsets about AI risks and building a movement, the time is now. Luckily, we’ve got foundations to build on; we don’t need to practice on something else first. We have examples of priming the public to pay attention to other long-term risks, such as global warming, pandemic readiness, and nuclear proliferation. Now we should add long-term AI risk to the menu.

And I would not say that I’m anything close to “optimistic” in the probabilistic sense about building the coalition we need, but we must try anyway. Motivation can be found without naïve optimism: a sense of acting with purpose is a useful state of mind that need not be coupled to one’s guesses about the most likely outcomes.

amanaplanacanal 3 days ago | parent [-]

It's hard.

Take global warming as an example: this is a real thing that's happening. We have measurements of CO2 concentrations and global temperatures. Most people accept that this is a real thing. And still getting anybody to do anything about it is nearly impossible.

Now you have a hypothetical risk of something that may happen sometime in the distant future, but may not. I don't see how you would be able to get anybody to care about that.

xpe 3 days ago | parent [-]

> And still getting anybody to do anything about it is nearly impossible.

Why exaggerate like this? Significant actions have been taken.

> I don't see how you would be able to get anybody to care about that.

Why exaggerate like this? Many people care.

OneMorePerson 5 days ago | parent | prev [-]

Yeah, I agree. It's not about where it's at now, but whether where we are now leads to something with general intelligence and the ability to self-improve. I don't quite see that happening on the curve it's on, but then again, what the heck do I know.

xpe 4 days ago | parent [-]

What do you mean about the curve not leading to general intelligence? Even if transformer architectures by themselves don’t get there, there are multifarious other techniques, including hybrids.

As long as (1) there are incentives for controlling ever-increasing intelligence; (2) the laws of physics don’t block us; and (3) enough people and organizations have the motivation and means, some of them will press forward. It just becomes a matter of time and probability. In general, I do not bet against human ingenuity, but I often bet against human wisdom.

In my view, one shared by many others, it would be smarter for the whole world to slow down AI capabilities advancement until we can have very high certainty that pushing ahead is worth the risk.