| ▲ | zkmon 6 hours ago |
| History has shown that an alien invasion can only happen because of the internal competition and in-fighting of the natives. Colonial empires proved it only a few centuries back. The invading alien powers are fuelled by the inviting natives. AI (and computing technology in general) is an alien, as it defies all worldly norms. It can have exact identical copies, can replicate, can exist everywhere, can communicate across huge distances without time lapse, can do huge work without time lapse, has no physical mass of its own, no respect for time, distance, mass, or the effort of thinking; it is not a living thing but can think... just the perfect alien-creature qualities. Why are they allowed to invade Earth? Business goals, of course: to get a temporary edge over the competitors, until they acquire the same. But once everyone has the same AI, there is no going back. AI has established itself through the weak channels that are filled with greed, channels that can be bribed with toys (a business edge) in return for the keys to dominance over the human race. |
|
| ▲ | jimmyjazz14 5 hours ago | parent | next [-] |
| Ugh, this entire way of treating AI like a magical alien invasion is the problem. It's just a statistical model, text-in, text-out (and it's humans that feed the input and act on the output). It's not some alien invasion that can't be stopped; it's just another technology that we as humans need to figure out how we want to use. Seriously, people need to stop anthropomorphizing AI, because doing so is one of the biggest hurdles to practical, common-sense AI adoption IMO. |
| |
| ▲ | ambicapter 4 hours ago | parent | next [-] | | It is definitely not "just" a statistical model. It is inextricably linked to the datasets it is trained on. Datasets that these companies possess, but that ordinary people do not. That is one half of where they get their power (the training techniques being the other, but those tend to bubble out to the general public, or at least the interested public). | | |
| ▲ | jimmyjazz14 4 hours ago | parent [-] | | How they were created doesn't change what they are, or how humans choose to use them. | | |
| ▲ | balamatom 2 hours ago | parent [-] | | And it is used as an instrument of persuasion. >uhg this entire way of treating AI like a magical alien invasion is the problem If we treated more things like "magical alien invasions" (i.e. occurrences that disrupt basic intuitions about normalcy) we'd be in a better place. Capitalism? A "magical alien invasion". Governance by sociopaths? Another "magical alien invasion". Imposition of cognitive intermediation? Yet another "magical alien invasion". Et cetera. Our intuitions about how the opposing force is meant to act are deeply wrong; that's what makes the situation dangerous at all. One way to become stronger and successfully resist would be to re-derive our concepts with greater rigor. A persuasion machine, though? I.e. an enemy that directly attacks the individual capacity for cognitive rigor - threatens the intelligent, disincentivizes the trained, satisfies the ignorant? Or one that attacks by changing what you value? Looks "superintelligent" to me. |
|
| |
| ▲ | John7878781 4 hours ago | parent | prev [-] | | > it just a statistical model, text-in, text-out (and it humans that feed the input and act on the output). You're not thinking long-term. What happens when AI is put in charge of systems that interact with the physical world? | | |
| ▲ | jimmyjazz14 4 hours ago | parent [-] | | That is a choice a human made. Imagine if someone proposed sending the outputs of a random number generator to a space laser and having it fire at will: would we blame the number generator for the destruction it causes? You may say that LLMs are not random number generators, and I would somewhat agree, but given their current state and our level of understanding of how they derive their output, they might as well be. | | |
| ▲ | cyclopeanutopia 3 hours ago | parent [-] | | So imagine that some humans make this choice, and then AI autonomously takes over and humans can't stop it anymore. Is that enough to treat AI in such a situation as a magical alien something that can threaten your or my survival? One thing the whole AI debate has shown me is how many people completely lack any sort of imagination. | | |
| ▲ | jimmyjazz14 2 hours ago | parent [-] | | My point is that wild imaginations about the current state of LLMs are the problem. We wouldn't even consider connecting a random number generator or a statistical model to a weapons system, but if we start thinking of it as an intelligence, some would actually be tempted to do so. | | |
| ▲ | cyclopeanutopia an hour ago | parent [-] | | I'm sorry, but do you realize it's 2026, not the 1980s anymore? Whatever you call intelligence, if LLMs don't pass your "intelligence test", there are a lot of people who won't pass it either. And I'm pretty sure there are plenty of countries that would make soldiers out of those people and give them weapons. |
|
|
|
|
|
|
| ▲ | fwipsy 6 hours ago | parent | prev | next [-] |
| > internal competition and in-fighting of the natives. What about the diseases that killed up to 95% of the population? I think you are basically correct, except for the historical analogy. |
| |
| ▲ | gherkinnn 5 hours ago | parent | next [-] | | The initial Spanish conquest of the Inca empire by a mere 168 Spaniards was not a question of disease so much as a war of succession the Incas fought amongst themselves, which Pizarro knew how to exploit. Throw in horses, steel, and gunpowder and you have a one-sided affair. | | |
| ▲ | fwipsy 5 hours ago | parent [-] | | Actually, this is another good counterexample! As I recall, the Incas lost battles against the Spaniards where they had something like 100x the numbers. It's true that they were initially divided, but they quickly united against the Spanish, and it didn't really help. The technological advantage was insurmountable. | | |
| ▲ | fwipsy 2 hours ago | parent | next [-] | | Turns out I misremembered. The Incas never fully united, and even though the Spaniards had a huge technological advantage in some battles, the war as a whole was more evenly matched. Technology, disease, and infighting ALL played a part in the Spanish victory. | |
| ▲ | SoftTalker 3 hours ago | parent | prev | next [-] | | > The technological advantage was insurmountable How's that playing out in the Middle East in 2026? | |
| ▲ | kjkjadksj 3 hours ago | parent | prev [-] | | How could it have been? It wasn't like they had machine guns. In the best case, I believe it takes something like a full minute to reload a musket. A Zerg rush would be sufficient tactics: at a 100-yard dash, your horde of unarmed natives is through musket range in maybe 10-15 seconds and pulling limbs off the Spaniards already. Why this wasn't done is, I think, the big mystery, and it lends credence to the idea of the Spaniards having significant force numbers through allies. | | |
| ▲ | fwipsy 3 hours ago | parent [-] | | Don't forget horses, armor, and steel weapons. It seems like Incan weapons had a lot of trouble penetrating Spanish armor, while the reverse was not true. Also, the Incas didn't just lack cavalry; they lacked the weapons and tactics to counter cavalry (such as pike formations.) That said, I was thinking of the Battle of Cajamarca, which was actually a Spanish ambush. 100x was probably overstating it; under other circumstances (e.g. rough terrain) Spanish technology had less of an edge. |
|
|
| |
| ▲ | RugnirViking 5 hours ago | parent | prev | next [-] | | This is not true of everywhere that was colonized. See Africa, or India. It would not have been possible, even with a very great tech advantage, to sustain military campaigns so far from Europe without a safe port to base supplies from, not to mention the manpower. These conquests were very much made possible by what was essentially a standard playbook: allying with some natives against others, and using trade imbalance, violence, strongarming, and other means to turn those "allies" into protectorates, and eventually colonies. | | |
| ▲ | fwipsy 5 hours ago | parent [-] | | Right. I am not saying diseases were a factor in every conquest, just refuting the parent's claim that conquest is "only possible" through infighting. It's not: overwhelming technological advantage or disease is also sufficient, even against a united culture. | | |
| ▲ | ithkuil 4 hours ago | parent [-] | | Yeah. Basically conquest is possible when the victim is weakened. There are many ways to become weakened. Infighting and disease are common causes of weakening. |
|
| |
| ▲ | voakbasda 6 hours ago | parent | prev [-] | | Wait, you think AI won’t eventually have full control over a bio lab, where it can manipulate an unsuspecting tech to produce and release a bioweapon to accomplish that explicit goal? Because I think that seems virtually inevitable at this point. | | |
| ▲ | username223 5 hours ago | parent [-] | | Humans will give a slop machine control of a lab full of CRISPR machines because they think it might make them a dollar? It wouldn’t take Supreme Super Intelligence for that to go badly. | | |
| ▲ | voakbasda 5 hours ago | parent [-] | | They don’t have to hand over control to lose control to AI. People are easily manipulated, and AI has proven itself able to manipulate people. How long until a tech is tricked or coerced into doing something dumb on a planet scale, based on intentional misinformation given by its apparently benevolent AI assistant? | | |
| ▲ | username223 4 hours ago | parent [-] | | > benevolent AI assistant? “Volent” is the problem there. Whose fault is it that someone was tricked by a boy? |
|
|
|
|
|
| ▲ | Rekindle8090 37 minutes ago | parent | prev | next [-] |
| I get the point of your metaphor, but it's missing the forest for the trees. |
|
| ▲ | mrob 6 hours ago | parent | prev | next [-] |
| >History has shown that an alien invasion can only happen because of the internal competition and in-fighting of the natives. Not true. Overwhelming technological advantage also works. As Hilaire Belloc put it: "Whatever happens, we have got / The Maxim gun, and they have not." The AI arms race is a race for that kind of advantage. Whoever wins (assuming they don't overshoot and trigger the "everybody dies" ending) becomes de facto king of the world. Everybody else is livestock. |
| |
| ▲ | rogerrogerr 6 hours ago | parent [-] | | I used to think this, but the AI labs sure seem neck-and-neck in the model race. It doesn't appear that anyone is developing an enormous lead, so I've become skeptical of the runaway king-of-the-world scenario. The open models seeming to be ~6 months behind is very encouraging, too. | | |
| ▲ | mrob 6 hours ago | parent [-] | | AI progress can potentially be extremely non-linear because of feedback effects. The first to build an AI smart enough to accelerate building even smarter AIs wins (or loses along with everybody else if it's more successful than they expected). | | |
| ▲ | Analemma_ 5 hours ago | parent [-] | | People have said this, but so far, if anything, the opposite has been empirically true. OpenAI had a huge lead and it just didn't matter; Anthropic and Google both caught them, and now they're neck and neck. It seems like the compute overhang forecloses the possibility of runaway progress that eliminates all your competitors. | | |
| ▲ | mrob 5 hours ago | parent [-] | | Any feedback process has a hard threshold for instability. The PA system doesn't howl until the microphone is close enough to the loudspeaker. The atomic bomb doesn't explode until the fissile material reaches critical mass. If you don't know where the threshold is you can't extrapolate. Compute is a limiting factor now, but there have already been huge improvements in compute efficiency, e.g. mixture of experts. It seems extraordinarily unlikely that there are no more to be found. And compute capacity continues to increase too. |
|
|
|
|
|
| ▲ | djeastm 6 hours ago | parent | prev | next [-] |
| >The invading alien powers are fuelled by the inviting natives. And by the massive numbers of people (software engineers, lawyers, doctors, etc.) currently being paid as contractors to help train the next AI models. They're essentially the inviting natives, paid in trifles to tell the invaders the secret ways of the natives farther inland, sucking the tribal knowledge out of the industry like a vacuum. |
|
| ▲ | Nevermark 6 hours ago | parent | prev | next [-] |
| This would imply that evolution, which is also an arms race that disrupts and obsoletes the status quo, is due to some "weakness". AI doesn't actually come from the outside. The fact that its economics have strong winner-take-a-lot aspects doesn't mean you can eliminate the current winners and end up anywhere different, because it's actually a natural, decentralized progression of improving efficiency. So that framing makes no sense. However, the thesis about the potential for violence is sound. I don't see a way out of that, given unending disruption with no coordinated, responsible response. I do not think this essay is hype. This moment requires great leadership and competence, but that is not what is getting elected. The last two decades of patience with massive businesses scaling up profitable conflicts of interest, and centralizing gatekeeper and dependency powers that offer no recourse to any individuals they mistreat, strongly suggest we are incapable of dealing with AI fallout, which will only accelerate and add to those trends. |
|
| ▲ | threethirtytwo 5 hours ago | parent | prev | next [-] |
| It reads like someone discovered analogies and decided they're a substitute for thinking. The entire argument lives and dies on one move: calling AI an "alien." And it's not even consistent: it starts with "alien" as in foreign invader, then quietly upgrades it to "space alien," and from that point on everything just inherits whatever sci-fi trait sounds dramatic. That's not reasoning; that's a word doing a costume change and dragging the argument along with it. And honestly, the quality of comments on HN feels like it's been tracking the broader decline in cognitive performance. The long-running Flynn effect has stalled or reversed in parts of the US, and some datasets show small but real drops in IQ-related measures over the past decade. You read threads like this and it's hard not to feel like you're watching that play out in real time. |
|
| ▲ | tmpz22 6 hours ago | parent | prev [-] |
| > Ai has established itself through the weak channels that are filled with greed, That explains the prolific AI use at incompetent agencies like the DoJ, DOGE, and others under the current administration. |