jimmyjazz14 5 hours ago

Ugh, this entire way of treating AI like a magical alien invasion is the problem. It's just a statistical model, text-in, text-out (and it's humans that feed the input and act on the output). It's not some alien invasion that can't be stopped; it's just another technology that we as humans need to figure out how we want to use. Seriously, people need to stop anthropomorphizing AI, because doing so is one of the biggest hurdles to practical, common-sense AI adoption IMO.

ambicapter 4 hours ago | parent | next [-]

It is definitely not "just" a statistical model. It is inextricably linked to the datasets it is trained on. Datasets that these companies possess, but that ordinary people do not. That is one half of where they get their power (the training techniques being the other, but those tend to bubble out to the general public, or at least the interested public).

jimmyjazz14 4 hours ago | parent [-]

How they were created doesn't change what they are, or how humans choose to use them.

balamatom an hour ago | parent [-]

And it is used as an instrument of persuasion.

> Ugh, this entire way of treating AI like a magical alien invasion is the problem

If we treated more things like "magical alien invasions" (i.e. occurrences that disrupt basic intuitions about normalcy) we'd be in a better place.

Capitalism? A "magical alien invasion". Governance by sociopaths? Another "magical alien invasion". Imposition of cognitive intermediation? Yet another "magical alien invasion". Et cetera.

Our intuitions about how the opposing force is meant to act are deeply wrong; that's what makes the situation dangerous at all. One way to become stronger and successfully resist would be to re-derive our concepts with greater rigor.

A persuasion machine, though? I.e. an enemy that directly attacks the individual capacity for cognitive rigor - threatens the intelligent, disincentivizes the trained, satisfies the ignorant? Or one that attacks by changing what you value? Looks "superintelligent" to me.

John7878781 4 hours ago | parent | prev [-]

> It's just a statistical model, text-in, text-out (and it's humans that feed the input and act on the output).

You're not thinking long-term. What happens when AI is put in charge of systems that interact with the physical world?

jimmyjazz14 4 hours ago | parent [-]

That is a choice a human made. Imagine if someone proposed sending the outputs of a random number generator to a space laser and let it fire at will; would we blame the number generator for the destruction it caused? You may say that LLMs are not random number generators, and I would somewhat agree, but given their current state and how little we understand about how they derive their output, they might as well be.

cyclopeanutopia 3 hours ago | parent [-]

So imagine that some humans make this choice, and then the AI autonomously takes over and humans can't stop it anymore. Is that enough to treat the AI, in such a situation, as a magical alien something that can threaten your survival or mine?

One thing the whole AI debate has shown me is how many people completely lack any sort of imagination.

jimmyjazz14 2 hours ago | parent [-]

My point is that wild imaginations about the current state of LLMs are the problem. We wouldn't even consider connecting a random number generator or a statistical model to a weapons system, but if we start thinking of it as an intelligence, some people actually would be tempted to do so.

cyclopeanutopia an hour ago | parent [-]

I'm sorry, but do you realize it's 2026, not the 1980s anymore? Whatever you call intelligence, if LLMs don't pass your "intelligence test", there are a lot of people who won't pass it either.

And I'm pretty sure there are plenty of countries that would make soldiers out of those people and give them weapons.