Retric 2 days ago

> Evaluating emotional context would be similar to a chess engine calculating its next move. There's nothing there that implies a conscience, sentience, morals, feelings, suffering or anything 'human'. It's just a necessary intermediate function to achieve its goal

If it’s limited to achieving goals, it’s not AGI. Real-time personal goal setting based on human-equivalent emotions is an “intellectual task.” One of many requirements for AGI, therefore, is to (a) understand the world in real time and (b) respond to it emotionally. In other words, AGI would by definition “necessitate having feelings.”

There are philosophical arguments that there’s something inherently unique about humans here, but without some testable definition you could make the same argument that some arbitrary group of humans don’t have those qualities: “gingers have no souls.” Or perhaps “dancing people have no consciousness,” which seems like gibberish not because it’s a less defensible argument, but because you haven’t been exposed to it before.

AstroBen 2 days ago | parent [-]

I mean, we just fundamentally have different definitions of AGI. Mine's based on outcomes and what it can do, so it's purely goal-based, not about the processes that mimic humans or animals.

I think this is the most likely first step, seeing as we're pushing for it to be created to solve real-world problems.

Retric a day ago | parent [-]

I’m not sure how you can argue something is a general intelligence if it can’t do those kinds of things. Imagine it comes out of the factory with a command: “Operate this android for a lifetime, pretending to be human.”

It seems like arguing something is a self-driving car if it needs a backup human driver for safety. It’s simply not what the people who initially came up with the term meant, and not what a plain-language understanding of the term would suggest.

AstroBen a day ago | parent [-]

Because I see intelligence as the ability to produce effective actions towards a goal. A more intelligent chess AI beats a less intelligent one by making better moves towards the goal of winning the game

The G in AGI is being able to generalize that intelligence across domains, including those it's never seen before, as a human could.

So I would fully expect an advanced AGI to be able to pretend to be a human. It has a model of the world, knows how humans act, and could move the android in a human-like manner, speak like a human, and learn the skills a human could.

Is it conscious or feeling though? Or following the same processes that a human does? That's not necessary. Birds and planes both fly, but they're clearly different things. We (probably) don't need to simulate the brain to create this kind of intelligence

Let's pinch this AGI to test if it 'feels pain':

<Thinking>

Okay, I see that I have received a sharp pinch at 55,77,3 - the elbow region

My goal is to act like a human. In this situation a human would likely exhibit a pain response

A pain response for humans usually involves a facial expression and often a verbal acknowledgement

Humans normally respond quite slowly, so I should wait 50ms before reacting

"Hey! Why did you do that? That hurt!"

...Is that thing human? I bet it'll convince most of the world it is... and that's terrifying
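
To make that thought experiment concrete, here's a rough sketch of what such a loop might look like. Everything in it is made up for illustration (the class name, the stimulus format, the 50ms delay), and nothing in it models actual suffering:

    # Illustrative sketch only: an agent that produces a human-like pain
    # response purely because its goal ("act like a human") calls for one.
    import time

    class HumanMimicAgent:
        REACTION_DELAY_S = 0.05  # small pause so the reply isn't inhumanly fast

        def observe(self, stimulus):
            # Map a raw sensor event to whatever a human would likely do.
            if stimulus["kind"] == "sharp_pinch":
                return self.pain_response(stimulus["location"])
            return None

        def pain_response(self, location):
            # Select the behaviour the goal calls for; no inner state
            # resembling pain exists anywhere in this object.
            time.sleep(self.REACTION_DELAY_S)
            return {
                "expression": "wince",
                "speech": "Hey! Why did you do that? That hurt!",
                "noted_location": location,
            }

    agent = HumanMimicAgent()
    print(agent.observe({"kind": "sharp_pinch", "location": "left elbow"}))

The point is only that the observable response can be produced without any inner state resembling pain, the same way a plane flies without being a bird.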

Retric a day ago | parent [-]

> Is it conscious or feeling though?

You’re falling into the “Gingers don’t have souls” trap I just spoke of.

We don’t define humans as individual components, so your toe isn’t you, but by that same token your car isn’t you either. If some sub-component of a system is emulating a human consciousness, then we don’t need to talk about the larger system here.

AGI must be able to do these things, but it doesn’t need to have human mental architecture. Something that can simulate physics well enough could emulate all the atomic-scale interactions in a human brain, for example. That virtual human brain would then experience everything we do, even if the system running the simulation didn’t.