tomaytotomato 13 hours ago

AI has always been dangerous, but not existentially dangerous.

The mythos is dangerous, but it's not going to Skynet us.

Just the same as the military drone using some sort of OpenCV library and target prioritisation loop isn't going to turn evil on us.

estearum 13 hours ago | parent [-]

Yeah we have literally no examples of more intelligent beings accidentally or purposefully wiping out less intelligent beings. Any time such a scenario could have conceivably happened, the less intelligent beings were able to foresee the methods, mechanisms, and motivations of the more intelligent beings and were able to counteract it.

flir 13 hours ago | parent | next [-]

You have a lot of faith in the chatbots.

estearum 11 hours ago | parent [-]

No no. I think the "chatbots" will be effectively neutered as long as there isn't a trillion-dollar-plus incentive to make the physical world highly malleable by text strings (e.g. by moving critical functions into information/code/data, or by creating physical systems that are controllable by information/code/data).

repelsteeltje 13 hours ago | parent | prev | next [-]

I get the sarcasm, but what about Neanderthals versus Homo Sapiens?

estearum 12 hours ago | parent [-]

What about it?

miroljub 13 hours ago | parent | prev [-]

If we look at our human history, there are millions of examples where less intelligent beings destroyed highly advanced civilizations.

It was never about intelligence, but about willingness to destroy (willingness to defend is not enough). Babylon, Egypt, Persia, Greece, Rome, China, ... I won't mention current examples ...

estearum 11 hours ago | parent [-]

1. "Less advanced civilization" != less intelligent people

2. The outcome of near-peer competition is surely highly dependent on factors like brutality, luck, tactics, etc. The competition between the defenders of crops (i.e. makers of pesticides) and insects is not. Not only are the insects destroyed en masse successfully, but neither side even recognizes itself as party to a competition. The insect has no conception of a crop, even when it walks on one, much less of a pesticide, even when it tastes one. The pesticide sprayer assigns zero moral valence to his daily genocide.

Do you have a reason to believe the gap between AI (not LLMs specifically, but AI generally) and human intelligence will peak near the difference between human competitors (what... 20-30 IQ points)?

If so, please share why you believe this.

miroljub 11 hours ago | parent [-]

> Do you have a reason to believe the gap between AI (not LLMs specifically, but AI generally) and human intelligence will peak near the difference between human competitors (what... 20-30 IQ points)?

So we established that competing human civilizations differ by 20-30 IQ points? Sounds reasonable.

> If so, please share why you believe this.

Basically two reasons:

1. there's no AI. There are LLMs, which basically do pattern matching on increasingly LLM-generated data sets. That inevitably leads to a local maximum where every advance becomes increasingly difficult for a decreasing gain in "intelligence".

2. the energy required to reach an ever-increasing level of "intelligence" (or let's just call it pattern-matching performance) quickly becomes so huge that it's simply not sustainable.

I think the current LLM approach is a dead-end bound to plateau not much higher than the current level.

I'm not saying it's impossible to reach AI, but it would require a paradigm shift that I'm not even able to imagine at this level of available technology.

estearum 10 hours ago | parent [-]

> there's no AI. There are LLMs

Obviously AI is physically possible, unless you think there's something universally special about the earthbound naked ape's brain-goo that imbues it with special intelligence-stuff.

> the energy required to reach an ever increasing level of "intelligence" (or let us just call it pattern matching performance) quickly becomes so huge that it's simply not sustainable.

Every single human being has an existence (dis)proof inside their skull

> I think the current LLM approach is a dead-end bound to plateau not much higher than the current level. I'm not saying it's impossible to reach AI, but it would require a paradigm shift that I'm not even able to imagine at this level of available technology.

Explicitly not relevant to the question I posed