> It just looks at what is in its training database and tries to find similar questions or discussion
I feel like we're going around in circles here. So I'll try to explain one last time.
Most of the content about nuclear war in any LLM's training set is almost surely about how horrifying it is and how we must never engage in it, because that's what humans usually say about nuclear war. So the plausible-sounding answer about nuclear war, going purely by probability, really should be "don't do it". Why isn't it?