ivraatiems 3 days ago

> Unless we can find indications that humans can exceed the Turing computable - something we as of yet have no indication is even theoretically possible - there is no rational reason to think it can't.

But doesn't this rely on the same thing you suggest we don't have, which is a workable definition of consciousness?

I think a lot of the 'we can't define consciousness, so we don't know what it is, so it's worthless to think about' argument - not just from you but from others - is hiding the ball. The heuristic, human judgment of whether something is conscious is an okay approximation, so long as we avoid the trap of 'well, it has natural language, so it must be conscious.'

There's a huge challenge in the way LLMs can seem like they are speaking out of intellect rather than just predicting patterns, but there's very little meaningful argument that they are actually thinking in any way similar to what you or I do when writing these comments. The fact that we don't have a perfect, rigorous definition, and tend to rely on 'I know it when I see it,' does not mean LLMs have consciousness or that it will be trivial to get them there.

All that is to say that when you say:

> I also don't know for sure whether or not you are "possessed of subjective experience" as I can't measure it.

"Knowing for sure" is not required. A reasonable suspicion one way or the other based on experience is a good place to start. I also identified two specific things LLMs don't do - they are not self-motivated or goal-directed without prompting, and there is no evidence they possess a sense of self, even with the challenge of lack of definition that we face.

nearbuy 3 days ago

> But doesn't this rely on the same thing you suggest we don't have, which is a workable definition of consciousness?

No, it's like saying we have no indication that humans have psychic powers and can levitate objects with their minds. The commenter is saying that no human has ever demonstrated the ability to figure out anything that isn't Turing computable, and that we have no reason to suspect this ability is even theoretically possible (for anything, human or otherwise).

vidarh 3 days ago

No, it rests on computability, Turing equivalence, the total absence of any evidence that we can exceed the Turing computable, and the lack of even a theoretical framework for what exceeding it would mean.

Without that, any limitations borne out of what LLMs don't currently do are irrelevant.

ivraatiems 3 days ago

That doesn't seem right to me. If I understand it right, your logic is:

1. Human intellect is Turing computable.

2. LLMs are based on Turing-complete technology.

3. Therefore, LLMs can eventually equal human intellect.

But if that is the right chain of assumptions, there are several issues with it. First, whether LLMs are Turing complete is itself a topic of debate, with points for[0] and against[1].

I suspect they probably _are_, but that doesn't mean LLMs are tautologically indistinguishable from human intelligence. Every computer that uses a Turing-complete programming language can theoretically solve any Turing-computable problem. That does not mean it will ever be able to do so efficiently or effectively in real time under real constraints, or that it is doing so now in a reasonable amount of real-world time using extant real-world computing power.
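
To make concrete how low a bar Turing completeness is, here is a toy sketch - a complete Turing machine simulator in a few lines of Python, running the classic 4-state "busy beaver" machine (purely an illustration of computability, not a claim about how LLMs compute):

    # Minimal Turing machine simulator. The machine below is the known
    # 4-state, 2-symbol "busy beaver" champion: it halts after 107 steps,
    # leaving 13 ones on the tape.
    from collections import defaultdict

    # (state, symbol) -> (write, head move, next state); 'H' means halt
    rules = {
        ('A', 0): (1, +1, 'B'), ('A', 1): (1, -1, 'B'),
        ('B', 0): (1, -1, 'A'), ('B', 1): (0, -1, 'C'),
        ('C', 0): (1, +1, 'H'), ('C', 1): (1, -1, 'D'),
        ('D', 0): (1, +1, 'D'), ('D', 1): (0, +1, 'A'),
    }

    tape, head, state, steps = defaultdict(int), 0, 'A', 0
    while state != 'H':
        write, move, state = rules[(state, tape[head])]
        tape[head] = write
        head += move
        steps += 1

    print(steps, sum(tape.values()))  # -> 107 13

Anything computable can in principle be run on something this simple, but "in principle" says nothing about time: the best-known 5-state machine already runs for 47,176,870 steps before halting, and the numbers explode from there.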

The processor I'm using to write this might be able to perform all the computations needed for human intellect, but even if it could, that doesn't mean it could do them quickly enough to compute even a single nanosecond of actual human thought before the heat death of the universe, or even before the end of this century.
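
To put rough numbers on that, here is a deliberately crude back-of-envelope sketch; every figure in it is an illustrative assumption, not a measurement:

    # Back-of-envelope: how long would one nanosecond of simulated brain
    # activity take on a desktop CPU? All numbers are loudly hypothetical.
    CPU_FLOPS = 1e11   # assume ~100 GFLOPS for a desktop processor
    COST_PER_SIM_SECOND = {         # FLOP per simulated second, assumed
        'spiking-network': 1e18,    # coarse neural-simulation guess
        'molecular':       1e30,    # hypothetical molecular-level guess
    }
    SIM_TIME = 1e-9    # one nanosecond of brain time

    for fidelity, flop in COST_PER_SIM_SECOND.items():
        secs = flop * SIM_TIME / CPU_FLOPS
        print(f"{fidelity}: {secs:.0e} seconds")

    # spiking-network: 1e-02 seconds -- tolerable
    # molecular:       1e+10 seconds -- roughly 300 years per nanosecond

Whether the real exponent is 18, 30, or something else entirely is exactly the open question; the point is only that computability and tractability can differ by arbitrarily many orders of magnitude.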

So when you say:

> Without that, any limitations borne out of what LLMs don't currently do are irrelevant.

It seems to me that exactly the opposite is true. If we want technology that even approaches human intelligence, we need to find approaches that solve for a number of things LLMs don't currently do. The fact that we don't yet know exactly what those things are is not evidence that they don't exist. Not only do they likely exist, but the more time we spend simply scaling LLMs instead of trying to find them, the further we are from any sort of genuine general intelligence.

[0] https://arxiv.org/abs/2411.01992

[1] https://medium.com/heyjobs-tech/turing-completeness-of-llms-...