miroljub 13 hours ago

If we look at human history, there are countless examples of less intelligent beings destroying highly advanced civilizations.

It was never about intelligence, but about willingness to destroy (willingness to defend is not enough). Babylon, Egypt, Persia, Greece, Rome, China, ... I won't mention current examples ...

estearum 11 hours ago | parent [-]

1. "Less advanced civilization" != less intelligent people

2. The outcome of near-peer competition surely depends heavily on factors like brutality, luck, and tactics; the competition between the defenders of crops (i.e. the makers of pesticides) and insects does not. Not only are the insects successfully destroyed en masse, but neither side even recognizes itself as a party to a competition. The insect has no conception of a crop, even as he walks through it, much less of a pesticide, even as he tastes it. The pesticide sprayer assigns zero moral valence to his daily genocide.

Do you have a reason to believe the gap between AI (not LLMs specifically, but AI generally) and human intelligence will peak near the difference between human competitors (what... 20-30 IQ points)?

If so, please share why you believe this.

miroljub 11 hours ago | parent [-]

> Do you have a reason to believe the gap between AI (not LLMs specifically, but AI generally) and human intelligence will peak near the difference between human competitors (what... 20-30 IQ points)?

So we established that competing human civilizations differ by 20-30 IQ points? Sounds reasonable.

> If so, please share why you believe this.

Basically two reasons:

1. There's no AI. There are LLMs, which basically do pattern matching on an increasingly LLM-generated data set. That inevitably leads to a local maximum, where every advance becomes more difficult for a diminishing gain in "intelligence".

2. The energy required to reach an ever-increasing level of "intelligence" (or let's just call it pattern-matching performance) quickly becomes so huge that it's simply not sustainable.

I think the current LLM approach is a dead end, bound to plateau not much higher than the current level.

I'm not saying it's impossible to reach AI, but it would require a paradigm shift that I can't even imagine with the current level of available technology.

estearum 10 hours ago | parent [-]

> there's no AI. There are LLMs

Obviously AI is physically possible, unless you think there's something universally special about the earthbound naked ape's brain-goo that imbues it with special intelligence-stuff.

> The energy required to reach an ever-increasing level of "intelligence" (or let's just call it pattern-matching performance) quickly becomes so huge that it's simply not sustainable.

Every single human being carries an existence disproof inside their skull: a general intelligence running on roughly 20 watts.

> I think the current LLM approach is a dead end, bound to plateau not much higher than the current level. I'm not saying it's impossible to reach AI, but it would require a paradigm shift that I can't even imagine with the current level of available technology.

Explicitly not relevant to the question I posed