abxyz 5 days ago

AI safety is focused on AGI but maybe it should be focused on how little “artificial intelligence” it takes to send people completely off the rails. We could barely handle social media, LLMs seem to be too much.

hirvi74 5 days ago | parent | next [-]

I think it's a canary in a coal mine, and the writing is already on the wall. People using AI like the commenter above likely aren't stupid. I think those people truly want love and connection in their lives, and for one reason or another, they are unable to find it.

I have the utmost confidence that things are only going to get worse from here. The world is becoming more isolated and individualistic as time progresses.

JohnMakin 5 days ago | parent [-]

I can understand that. I’ve had long periods in my life where I’ve desired that - I’d argue I’m probably in one now. But it’s not real; it can’t possibly perform that function. It seems to border on some kind of delusion to use these tools for that.

TheOtherHobbes 5 days ago | parent [-]

It does, but it's more that the delusion is obvious, compared to other delusions that are equally delusional - like the ones about the importance of celebrities, soap opera plots, entertainment-adjacent dramas, and quite a lot of politics and economics.

Unlike those celebrities, you can have a conversation with it.

Which makes it the ultimate parasocial product - the other kind of Turing completeness.

MrGilbert 5 days ago | parent | prev [-]

It always has been. People tend to see human-like behavior where there is none. Be it their pets, plants or… programs. The ELIZA effect.[1]

[1] https://en.wikipedia.org/wiki/ELIZA_effect
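(For context: ELIZA worked by simple keyword pattern matching and pronoun reflection, with no understanding at all. A minimal illustrative sketch in that spirit - the rules and phrasings here are made up for illustration, not Weizenbaum's original 1966 DOCTOR script:)

```python
import re

# First-person -> second-person word swaps used to echo the user's own words back.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Keyword rules: a regex and a canned response template (illustrative, not ELIZA's real script).
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    # Swap pronouns so the echo reads naturally ("my job" -> "your job").
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(text: str) -> str:
    # First matching rule wins; otherwise fall back to a non-committal prompt.
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please, go on."

print(respond("I feel lonely these days"))  # -> Why do you feel lonely these days?
```

Even this toy version shows the trick: the program contributes nothing but a mirror, yet the reflected wording reads as attentiveness - which is the ELIZA effect in miniature.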

_heimdall 5 days ago | parent [-]

Isn't the ELIZA-Effect specific to computer programs?

Seeing human-like traits in pets or plants is a much trickier subject than seeing them in what is ultimately a machine, developed entirely separately from the evolution of living organisms.

We simply don't know what it's like to be a plant or a pet. We can't say they definitely have human-like traits, but we similarly can't rule it out. Some of the uncertainty comes from the fact that we share ancestors at some point, and our biologies aren't entirely distinct. The same isn't true when comparing humans and computer programs.

MrGilbert 5 days ago | parent | next [-]

Yes, it is - I realize my wording wasn't very good. That's what I meant: the ELIZA effect applies specifically to machine <> human interaction.

_heimdall 5 days ago | parent [-]

Got it, sorry I may have just misread your comment the first time!

tsimionescu 5 days ago | parent | prev [-]

The same vague arguments apply to computers. We know computers can reason, and reasoning is an important part of our intelligence and consciousness. So even for ELIZA, or even more so for LLMs, we can't entirely rule out that they may have aspects of consciousness.

You can also more or less apply the same thing to rocks, too, since we're all made up of the same elements ultimately - and maybe even empty space with its virtual particles is somewhat conscious. It's just a bad argument, regardless of where you apply it, not a complex insight.

pegasus 5 days ago | parent [-]

That's an instance of the slippery-slope fallacy at the end. Mammals share so much more evolutionary history with us than rocks do that, yes, it justifies ascribing them an inner subjective world, for example, even though we will never know what it is to be a cat from a cat's perspective. Sometimes quantitative accumulation does lead to qualitative jumps.

Also worth noting: alongside the very human propensity to anthropomorphize, there's an equally human but opposite tendency to deny animals the higher capacities we pride ourselves on. Basically a narcissistic impulse to set ourselves apart from the cousins we'd like to believe we've left completely behind. Witness the recurring surprise each time we find yet more proof that things are nowhere near that cut-and-dried.