withinboredom 4 days ago

It can’t learn or think unless prompted, then it is given a very small slice of time to respond and then it stops. Forever. Any past conversations are never “thought” of again.

It has no intelligence. Intelligence implies thinking and it isn’t doing that. It’s not notifying you at 3am to say “oh hey, remember that thing we were talking about. I think I have a better solution!”

No. It isn’t thinking. It doesn’t understand.

0xCMP 4 days ago | parent | next [-]

Just because it's not independent and autonomous does not mean it could not be intelligent.

If existing human minds could be stopped/started without damage, copied perfectly, and had their memory state modified at will, would that make us not intelligent?

dgfitz 4 days ago | parent [-]

> Just because it's not independent and autonomous does not mean it could not be intelligent.

So to rephrase: it’s not independent or autonomous. But it can still be intelligent. This is probably a good time to point out that trees are independent and autonomous. So we can conclude that LLMs are possibly as intelligent as trees. Super duper.

> If existing humans minds could be stopped/started without damage, copied perfectly, and had their memory state modified at-will would that make us not intelligent?

To rephrase: if you take something already agreed to be intelligent, and change it, is it still intelligent? The answer is, no damn clue.

These are worse than weak arguments, there is no thesis.

hatthew 4 days ago | parent [-]

The thesis is that "intelligence" and "independence/autonomy" are independent concepts. Deciding whether LLMs have independence/autonomy does not help us decide if they are intelligent.

fluidcruft 4 days ago | parent | prev [-]

It sounds like you are saying the only difference is that human stimulus streams don't switch on and off?

If you were put into a medically induced coma, you probably shouldn't be considered intelligent either.

withinboredom 4 days ago | parent [-]

I think that’s a valid assessment of my argument, but it goes further than just “always on”. There’s an old book called On Intelligence that asked these kinds of questions about AI 20+ years ago. I don’t remember the details, but a large part of what makes something intelligent doesn’t just boil down to what you know and how well you can articulate it.

For example, we as humans aren’t even present in the moment — different stimuli take different lengths of time to reach our brain, so our brain creates a synthesis of “now” that isn’t even real. You can’t even play table tennis unless you can predict up to one second into the future with enough detail to be in the right place to hit the ball back to your opponent.

Meanwhile, an AI will go off-script during code changes without running it by the human. It should be able to easily predict that the human is going to say “wtaf” when it doesn’t do what was asked, and handle that potential case BEFORE it’s an issue. That’s ultimately what makes something intelligent: the ability to predict the future, anticipate issues, and handle them.

No AI currently does this.