triceratops 10 hours ago

> AI has limited real world experience or grasp of the consequences [of nuclear weapons]

I don't understand this argument. Almost no human has real world experience of the consequences of nuclear weapons. AI is working from the same sources of knowledge as the rest of us - text, audio, pictures, and video.

yndoendo 9 hours ago | parent | next

AI has zero understanding of reality. It just regurgitates what it was fed during training. There is no feedback loop for learning, nor any consequence attached to its reasoned results.

We humans hallucinate, daily in fact. Here's an example for people who have never had long hair.

1) Grow your hair long.

2) Your peripheral vision will start to be consumed by your hair.

3) Your hair will fall and sway, putting your brain into fight-or-flight mode, and you will turn your head to look.

4) Turning and looking provides the feedback to acknowledge it was a hallucination.

5) Your brain now suppresses the fight-or-flight response because it was trained by continual feedback that it was just the wind blowing your hair, or the movement of your head, that caused it.

Even though I've told you about this, the first time you grow your hair out your brain will still need the real-world experience to mitigate the hallucination.

AI has none of these abilities ...

jqpabc123 10 hours ago | parent | prev | next

> Almost no human has real world experience of the consequences of nuclear weapons.

Exactly!

Humans possess this amazing ability to understand and extrapolate beyond personal experience.

It's called "intelligence".

triceratops 8 hours ago | parent

LLMs have shown the ability to do this. Not as much as the most capable humans. But still pretty good.

jqpabc123 8 hours ago | parent

So "just nuke 'em" is pretty good for you?

triceratops 7 hours ago | parent

No. That's why I'm asking where it comes from. The explanation that "LLMs don't have experience of nuclear war" isn't satisfying because nobody really has any experience of nuclear war.

jqpabc123 5 hours ago | parent

Humans don't really need to experience nuclear war to comprehend the consequences and implications of it.

LLMs don't really comprehend much of anything. An LLM just looks at what is in its training data and tries to find similar questions or discussions in order to assemble a plausible-sounding answer based on probability.

Not the sort of thing anyone should rely on for "critical" decision making.

triceratops 4 hours ago | parent

> It just looks at what is in its training data and tries to find similar questions or discussions

I feel like we're going around in circles here. So I'll try to explain one last time.

Most of the content about nuclear war in any LLM's training set is almost surely about how horrifying it is and how we must never engage in it. Because that's what humans usually say about nuclear war. The plausible sounding answer about nuclear war, based on probability, really should be "don't do it". So why isn't it?

jqpabc123 4 hours ago | parent

> So why isn't it?

Easy answer --- it focused only on "winning". It never bothered to consider the consequences.

Similar lack of judgment is manifested by LLMs every day. It's working with memory and probability --- not to be confused with "intelligence".

black6 10 hours ago | parent | prev

AI is not at all like real intelligence. Computers do not know what words mean because they do not experience the world as we do. They don't have the common sense or wisdom that people accumulate through the experience of life. Humans can understand the consequences of nuclear war. Computers can only predict the next best word in their response from a statistical map that has no connection to meatspace.
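For readers unfamiliar with what "predict the next best word from a statistical map" means mechanically, here is a minimal toy sketch. The vocabulary and all scores are made-up numbers for illustration only, not taken from any real model:

```python
import math

# A "statistical map" in miniature: raw scores (logits) over a tiny
# vocabulary. These numbers are invented purely for this sketch.
logits = {"don't": 2.0, "do": 0.5, "it": 0.1, "nuke": -1.0}

def softmax(scores):
    # Turn raw scores into a probability distribution that sums to 1.
    exps = {word: math.exp(s) for word, s in scores.items()}
    total = sum(exps.values())
    return {word: e / total for word, e in exps.items()}

probs = softmax(logits)

# Greedy decoding: pick the single most probable next word.
next_word = max(probs, key=probs.get)
```

The model itself is just the function that produces those scores from context; "understanding", in either side's sense of the word, is not represented anywhere in the mechanism.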

triceratops 8 hours ago | parent

> Humans can understand the consequences of nuclear war

And I'm asking why. Nearly no human alive has experienced nuclear war. The nuclear taboo is strongly represented in any source an AI would have consumed. We know about the nuclear taboo because we've been told over and over.

> Computers can only predict the next best word in their response from a statistical map that has no connection to meatspace

This argument is at least 2 years old. The statistical map came from human experiences in meatspace. It wasn't generated randomly. It has at least some connection to the real world.

Just because how something works seems simple doesn't mean what it does is simple.