techbruv 4 days ago

I don’t understand the argument “AI is just XYZ mechanism, therefore it cannot be intelligent”.

Does the mechanism really disqualify it from intelligence if behaviorally, you cannot distinguish it from “real” intelligence?

I’m not saying that LLMs have certainly surpassed the “cannot distinguish from real intelligence” threshold, but saying there’s not even a little bit of intelligence in a system that can solve more complex math problems than I can seems like a stretch.

stickfigure 4 days ago | parent | next [-]

> if behaviorally, you cannot distinguish it from “real” intelligence?

Current LLMs are a long way from there.

You may think "sure seems like it passes the Turing test to me!" but they all fail if you carry on a conversation long enough. AIs need some equivalent of neuroplasticity, and so far they don't have it.

PxldLtd 4 days ago | parent [-]

This is what I think is the next evolution of these models. Our brains are made up of many different types of neurones, interspersed, with local regions composed of specific types. From my understanding, most tensor-based approaches don't integrate these different neuronal models at the node level; it's usually done by feeding several disparate models data and combining the end results. Being able to reshape the underlying model, with varying tensor types that can migrate or have a lifetime, seems exciting to me.
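
To make that concrete, here's a toy PyTorch sketch (all names hypothetical, not any real architecture): a single layer whose unit "regions" run different neuron models, the sort of thing you could imagine a dynamic version growing, shrinking, or expiring over time.

    import torch
    import torch.nn as nn

    class MixedNeuroneLayer(nn.Module):
        # Toy layer: contiguous "regions" of units run different neuron
        # models (activations), loosely mimicking interspersed cell types.
        def __init__(self, dim, region_fns):
            super().__init__()
            self.linear = nn.Linear(dim, dim)
            self.region_fns = region_fns          # one callable per region
            self.region_size = dim // len(region_fns)

        def forward(self, x):
            h = self.linear(x)
            parts = []
            for i, fn in enumerate(self.region_fns):
                lo = i * self.region_size
                parts.append(fn(h[..., lo:lo + self.region_size]))
            return torch.cat(parts, dim=-1)

    # Three "neurone types" in one layer; a dynamic version could
    # add or remove regions over the model's lifetime.
    layer = MixedNeuroneLayer(12, [torch.tanh, torch.relu, torch.sigmoid])
    out = layer(torch.randn(2, 12))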

8note 4 days ago | parent | prev | next [-]

I don't see the need to focus on "intelligent" compared to "it can solve these problems well, and can't solve these other problems".

What's the benefit of calling something "intelligent"?

hatthew 4 days ago | parent [-]

Strongly agree with this. When we were further from AGI, many people imagined that there was a single concept of AGI that would be obvious once we reached it. Now, we're close enough to AGI for most people to realize that we don't know where it is. Most people agree we're at least moving towards it rather than away from it, but nobody knows where it is, and we're still more focused on finding it than on making useful things.

lupusreal 4 days ago | parent | prev | next [-]

What it really boils down to is "the machine doesn't have a soul". Just an unfalsifiable and ultimately meaningless objection.

gitremote 4 days ago | parent | next [-]

Incorrect. Vertebrate animal brains update their neural connections when interacting with the environment. LLMs don't do that. Their model weights are frozen for every release.
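
To make the contrast concrete, here's a toy PyTorch sketch (purely illustrative, not how any particular LLM is served): the deployed mode never touches the weights, while a plastic system would fold each interaction back into them.

    import torch
    import torch.nn as nn

    model = nn.Linear(8, 8)   # stand-in for a trained network

    # Deployed-LLM style: weights frozen, inference only.
    model.requires_grad_(False)
    with torch.no_grad():
        y = model(torch.randn(1, 8))   # weights identical before and after

    # Brain-like plasticity: every interaction nudges the connections.
    model.requires_grad_(True)
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    x, target = torch.randn(1, 8), torch.randn(1, 8)
    loss = nn.functional.mse_loss(model(x), target)
    loss.backward()
    opt.step()   # the interaction itself updated the weights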

pizza 4 days ago | parent | next [-]

But why can’t I then just say that you need to relocate the analogy's components: activations are their neural connections, the text is their environment, the weights are fixed just like our DNA is, etc.?

lupusreal 2 days ago | parent | prev [-]

As I understand it, octopuses have their reasoning and intelligence essentially baked into them at birth, shaped by evolution, and do relatively little learning during life because their lives are so short. Very intelligent, obviously, but very unlike people.

skissane 4 days ago | parent | prev | next [-]

Maybe panpsychism is true and the machine actually does have a soul, because all machines have souls, even your lawnmower. But possibly the soul of a machine running a frontier AI is a bit closer to a human soul than your lawnmower’s soul is.

sfink 4 days ago | parent [-]

By that logic, Larry Ellison would have a soul. You've disproven panpsychism! Congratulations!

tonkinai 4 days ago | parent | prev [-]

Maybe the soul is not as mysterious as we think it is?

lupusreal 4 days ago | parent [-]

There is no empirical test for souls.

shakna 4 days ago | parent | prev | next [-]

Scientifically, intelligence requires organizational complexity, and has for about a hundred years.

That does actually disqualify some mechanisms from counting as intelligent, as the behaviour cannot reach that threshold.

We might change the definition, since science adapts to the evidence, but right now there are major hurdles to overcome before such mechanisms can be considered intelligent.

Eisenstein 4 days ago | parent [-]

What is the scientific definition of intelligence? I assume that it is comprehensive, internally consistent, and that it fits all of the things that are obviously intelligent while excluding the things that are obviously not. Of course, being scientific, I assume it is also falsifiable.

withinboredom 4 days ago | parent | prev [-]

It can’t learn or think unless prompted; then it is given a very small slice of time to respond, and then it stops. Forever. Past conversations are never “thought” of again.

It has no intelligence. Intelligence implies thinking and it isn’t doing that. It’s not notifying you at 3am to say “oh hey, remember that thing we were talking about. I think I have a better solution!”

No. It isn’t thinking. It doesn’t understand.

0xCMP 4 days ago | parent | next [-]

Just because it's not independent and autonomous does not mean it could not be intelligent.

If existing human minds could be stopped/started without damage, copied perfectly, and had their memory state modified at will, would that make us not intelligent?

dgfitz 4 days ago | parent [-]

> Just because it's not independent and autonomous does not mean it could not be intelligent.

So to rephrase: it’s not independent or autonomous. But it can still be intelligent. This is probably a good time to point out that trees are independent and autonomous. So we can conclude that LLMs are possibly as intelligent as trees. Super duper.

> If existing humans minds could be stopped/started without damage, copied perfectly, and had their memory state modified at-will would that make us not intelligent?

To rephrase: if you take something already agreed to as intelligent, and changed it, is it still intelligent? The answer is, no damn clue.

These are worse than weak arguments; there is no thesis.

hatthew 4 days ago | parent [-]

The thesis is that "intelligence" and "independence/autonomy" are orthogonal concepts. Deciding whether LLMs have independence/autonomy does not help us decide whether they are intelligent.

fluidcruft 4 days ago | parent | prev [-]

It sounds like you are saying the only difference is that human stimulus streams don't switch on and off?

If you were put into a medically induced coma, you probably shouldn't be considered intelligent either.

withinboredom 4 days ago | parent [-]

I think that’s a valid assessment of my argument, but it goes further than just “always on”. There’s an old book called On Intelligence that asked these kinds of questions about AI 20+ years ago. I don’t remember the details, but a large part of what makes something intelligent doesn’t boil down to just what you know and how well you can articulate it.

For example, we as humans aren’t even present in the moment: different stimuli take different lengths of time to reach our brain, so our brain creates a synthesis of “now” that isn’t even real. You can’t even play table tennis unless you can predict up to one second into the future, in enough detail to be in the right place to hit the ball back to your opponent.

Meanwhile, an AI will go off-script during code changes without running it by the human. It should be able to easily predict that the human is going to say “wtaf” when it doesn’t do what was asked, and handle that case BEFORE it’s an issue. That’s ultimately what makes something intelligent: the ability to predict the future, anticipate issues, and handle them.

No AI currently does this.