gf000 3 days ago

Well, unless you believe in some spiritual, non-physical aspect of consciousness, we could probably agree that human intelligence is Turing-complete (with a slightly sloppy use of terms).

So any other Turing-complete model can emulate it, including a computer. We can even randomly generate Turing machines, as they are just data. Now imagine we get extremely lucky and happen to end up with a super-intelligent program which, through whatever medium it communicates (it could be simply text-based, but 2D video with audio is no different from my perspective), can't be differentiated from a human being.
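Since Turing machines are just data, the "randomly generate one" step can even be sketched directly. This is only a toy illustration; the encoding, state count, and step budget here are all arbitrary choices of mine:

```python
import random

# A Turing machine is literally a finite table of data.
# Toy encoding (my own, arbitrary): states 0..n-1 plus a halt state n,
# tape alphabet {0, 1}, and (state, symbol) -> (write, move, next_state).
def random_tm(n_states=4):
    return {(s, sym): (random.choice((0, 1)),
                       random.choice((-1, 1)),
                       random.randrange(n_states + 1))  # n_states == halt
            for s in range(n_states) for sym in (0, 1)}

def run(table, n_states=4, max_steps=1000):
    tape, head, state = {}, 0, 0
    for step in range(max_steps):
        if state == n_states:              # reached the halt state
            return tape, step
        write, move, state = table[(state, tape.get(head, 0))]
        tape[head] = write
        head += move
    return tape, None                      # no halt within the budget

random.seed(0)
tape, steps = run(random_tm())
```

Almost every machine drawn this way loops forever or halts doing nothing interesting; the point is only that "a program" is sampled data, not that sampling is a practical path to intelligence.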

Would you consider it sentient?

Now replace the random generation with, say, a backpropagation algorithm. If the model is sufficiently large, don't you think it's indistinguishable from the former case - that is, novel qualities could emerge?

With that said, I don't think that current LLMs are anywhere close to this category, but I just don't think your reasoning is sound.

DanHulton 3 days ago | parent | next [-]

> we could probably agree that human intelligence is Turing-complete (with a slightly sloppy use of terms).
> So any other Turing-complete model can emulate it

You're going off the rails IMMEDIATELY in your logic.

Sure, one Turing-complete computer language can have its logic "emulated" by another, fine. But human intelligence is not a computer language -- you're mixing up the terms "Turing complete" and "Turing test".

It's like mixing up the terms "Strawberry jam" and "traffic jam" and then going on to talk about how cars taste on toast. It's nonsensical.

gf000 3 days ago | parent [-]

Game of Life, PowerPoint, and a bunch of non-PL things are all Turing-complete. I'm not mixing up terms; I did use slightly sloppy terminology, but it is the correct concept. My point is that we don't know of a computational model that can't be expressed by a Turing machine; humans are a physical "machine", ergo we must also fall into that category.

Give my comment another read; it was quite understandable from context. (Also, you may want to give the Turing paper a read, because a machine being executable by a person was an important concept within it.)

DanHulton 2 days ago | parent | next [-]

Again, you're going wildly off the rails in your logic. Sure, "executable by a human" is part of the definition for Turing machines, but that's only talking about Turing-specific capabilities. If you want to argue that a Turing machine can emulate the specific definition of Turing machine capabilities that humans can perform, that's fine. But you're saying that because humans can ACT LIKE Turing machines, they must BE Turing machines, and are therefore emulatable.

This is the equivalent of saying "I have set up a complex mechanical computer powered by water that is Turing complete. Since any Turing complete system can emulate another one, it means that any other Turing complete system can also make things wet and irrigate farms."

Human intelligence is not understood. It can be made to do Turing complete things, but you can't invert that and say that because you've read the paper on Turing completeness, you now understand human intelligence.

coopierez 2 days ago | parent | prev [-]

But humans can do things Turing machines cannot. Such as eating a sandwich.

gf000 2 days ago | parent [-]

That's not a computation, it's a side effect. It just depends on what you wire your "computer" up to. A Turing machine in itself is just a (potentially non-returning) mathematical function, but you are free to map any input/output to it.

Actually, the way LLMs are extended with tools is pretty much the same (an LLM itself has no access to the internet, but if it emits some specific symbols, the external "glue" will do a search and then the LLM is free to use the results)
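A minimal sketch of that "glue" might look like this. Everything here (the `SEARCH:` convention, the stub functions) is hypothetical, just to show that the model itself only ever maps text to text:

```python
# Stand-ins for a real model and a real search backend (both hypothetical).
def fake_llm(prompt):
    if "RESULT:" not in prompt:
        return "SEARCH: turing machines"   # model emits its special symbols
    return "Done: a Turing machine is a model of computation."

def fake_search(query):
    return "RESULT: top hit for " + repr(query)

def agent_loop(prompt, max_turns=5):
    out = fake_llm(prompt)
    for _ in range(max_turns):
        if not out.startswith("SEARCH: "):
            return out                      # plain text: we're done
        # The model never touched the network; the glue runs the search
        # and feeds the result back in as more input text.
        prompt += "\n" + fake_search(out[len("SEARCH: "):])
        out = fake_llm(prompt)
    return out

print(agent_loop("What is a Turing machine?"))
```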

almosthere 3 days ago | parent | prev | next [-]

We used to say "if you put a million monkeys on typewriters you would eventually get Shakespeare", and no one would ever say that anymore, because now we can literally write Shakespeare with an LLM.

And the monkey strategy has been 100% dismissed as shit.

We know how to deploy monkeys on typewriters, but we don't know what they'll type.

We know how to deploy transformers to train and run inference on a model, but we don't know what they'll type.

We DON'T know how a thinking human (or animal) brain works.

Do you see the difference?

nearbuy 3 days ago | parent | next [-]

The monkeys on typewriters saying is just a colorful way of saying that an infinite random sequence will contain all finite sequences somewhere within it. Which is true. But I don't see what infinite random sequences have to do with LLMs or human thinking.
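For what it's worth, the reason "eventually" carries all the weight in that saying is how fast the expected wait grows. A quick sketch; the alphabet, targets, and character budget are all arbitrary choices of mine:

```python
import random

random.seed(1)
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def first_hit(target, max_chars=1_000_000):
    """Characters typed until `target` first appears in a random stream."""
    window = ""
    for i in range(max_chars):
        window = (window + random.choice(ALPHABET))[-len(target):]
        if window == target:
            return i + 1
    return None  # not seen within the budget

# Each extra character multiplies the expected wait by 27:
# "to" takes ~27^2 = 729 characters on average, "to be" ~27^5 = 14 million,
# so the second call will most likely exhaust this budget and return None.
print(first_hit("to"))
print(first_hit("to be"))
```

An infinite random stream does contain every finite string with probability 1, but, as noted above, that construction has little to do with how an LLM produces text.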

> Do you see the difference

No? I'm not sure what you're getting at.

procaryote 3 days ago | parent | prev | next [-]

To be fair, we also trained the LLM on (among other things) Shakespeare, and adjusted the weights so that generating Shakespeare would be more likely after that training.

We don't claim a JPEG can paint great art, even though certain jpegs do.

almosthere 3 days ago | parent [-]

So, more proof it's not thinking, right? It can only regurgitate a large if/else superstructure with some jumping around.

procaryote 2 days ago | parent [-]

Who truly knows if you can make an if-else + randomness structure big enough to become smart?

But yes, we built a machine that generates text similar to what we built it from, and now we're looking at it generating text and are all impressed.
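Taken literally, a "big if/else plus randomness" text generator is easy to sketch. The table below is a hypothetical toy, but structurally it is exactly a lookup from context to a next-token distribution plus a sampler, which is the caricature being debated:

```python
import random

# Hypothetical toy "model": context -> next-token distribution.
# A real LLM computes this mapping implicitly with arithmetic instead of
# storing a table, but the input/output shape is the same.
TABLE = {
    ("to",): {"be": 0.9, "do": 0.1},
    ("to", "be"): {"or": 0.8, ".": 0.2},
    ("be", "or"): {"not": 1.0},
}

def generate(context, n=3, seed=0):
    rng = random.Random(seed)        # the "randomness" part
    out = list(context)
    for _ in range(n):
        dist = TABLE.get(tuple(out[-2:])) or TABLE.get(tuple(out[-1:]))
        if dist is None:             # the "if/else" part: pure lookup
            break
        words, probs = zip(*dist.items())
        out.append(rng.choices(words, probs)[0])
    return " ".join(out)

print(generate(["to"]))
```

Whether scaling this shape up counts as "smart" is the open question; the sketch only shows the shape.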

KoolKat23 3 days ago | parent | prev [-]

I was going to use this analogy in the exact opposite way. We do have a very good understanding of how the human brain works. Saying we don't understand how the brain works is like saying we don't understand how the weather works.

"If you put a million monkeys on typewriters you would eventually get Shakespeare" is exactly why LLMs will succeed, and why humans have succeeded. If this weren't the case, why didn't humans 30,000 years ago create spacecraft, given that we were endowed with the same natural "gift"?

almosthere 3 days ago | parent [-]

Yeah no, show me one scientific paper that says we know how the brain works. And not one about a single neuron, because that does absolute shit towards understanding thinking.

KoolKat23 3 days ago | parent [-]

This is exactly why I mentioned the weather.

A scientific paper has to be verifiable: you should be able to recreate the experiment and come to the same conclusion. That's very, very difficult to do with brains, which have trillions of parameters and can't be controlled down to the neuron level. Notwithstanding the ethical issues.

We don't have a world weather simulator that is 100% accurate either, given the complex interplay and inability to control the variables, i.e. it's not verifiable. It'd be a bit silly to say we don't know why it's going to rain at my house tomorrow.

Until then it is a hypothesis, and we can't say we know, even if the overwhelming evidence indicates that in fact we do.

myrmidon 3 days ago | parent | prev | next [-]

> Would you consider it sentient?

Absolutely.

If you simulated a human brain by the atom, would you think the resulting construct would NOT be? What would be missing?

I think consciousness is simply an emergent property of our nervous system, but in order to express itself "language" is obviously needed and thus requires lots of complexity (more than what we typically see in animals or computer systems until recently).

prmph 3 days ago | parent [-]

> If you simulated a human brain by the atom,

That is what we don't know is possible. We don't even know what physics or particles remain undiscovered. And from what we do currently know, atoms are too coarse to form the basis of such "cloning".

And, my viewpoint is that, even if this were possible, just because you simulated a brain atom by atom, does not mean you have a consciousness. If it is the arrangement of matter that gives rise to consciousness, then would that new consciousness be the same person or not?

If you have a basis for answering that question, let's hear it.

myrmidon 3 days ago | parent | next [-]

> You don't even know what physics or particles are as yet undiscovered

You would not need the simulation to be perfect; there is ample evidence that our brains are quite robust against disturbances.

> just because you simulated a brain atom by atom, does not mean you have a consciousness.

If you don't want that to be true, you need some kind of magic, that makes the simulation behave differently from reality.

How would a simulation of your brain react to a question that you would answer "consciously"? If it gives the same responses to the same inputs, how could you argue it isn't conscious?

> If it is the arrangement of matter that gives rise to consciousness, then would that new consciousness be the same person or not?

The simulated consciousness would be a different one from the original; both could exist at the same time and would be expected to diverge. But their reactions/internal state/thoughts could be matched at least for an instant, and be very similar for potentially much longer.

I think this is just Occams razor applied to our minds: There is no evidence whatsoever that our thinking is linked to anything outside of our brains, or outside the realm of physics.

prmph 3 days ago | parent | next [-]

> "quite robust against disturbances."

does not mean that the essential thing that gives rise to consciousness is only approximate. To give an example from software: you can write software that is robust against bad input, attempts to crash it, even bit flips. But if I came in and changed just a single character in the source code, that might cause it to fail compilation, fail to run, or become quite buggy.

> If you don't want that to be true, you need some kind of magic,

This is just what I'm saying is a false dichotomy. The only reason some are unable to see beyond it is that we think the basic logic we understand is all there could be.

In this respect physics has been very helpful, because without peering into reality, we would have kept deluding ourselves that pure reason was enough to understand the world.

It's like trying to explain quantum mechanics to a well educated person or scientist from the 16th century without the benefit of experimental evidence. No way they'd believe you. In fact, they'd accuse you of violating basic logic.

myrmidon 3 days ago | parent [-]

How is it a false dichotomy? If you want consciousness to NOT be simulateable, then you need some essential component to our minds that can't be simulated (call it soul or whatever) and for that thing to interface with our physical bodies (obviously).

We have zero evidence for either.

> does not mean that the essential thing gives rise to consciousness is only approximate

But we have 8 billion different instances that are presumably conscious; plenty of them have all kinds of defects, and the whole architecture has been derived by a completely mechanical process free of any understanding (=> evolution/selection).

On the other hand, there is zero evidence of consciousness continuing/running before or after our physical brains are operational.

prmph 3 days ago | parent [-]

> plenty of them have all kinds of defects,

Defects that have not rendered them unconscious, as long as they still are alive. You seem not to see the circularity of your argument.

I gave you an example to show that robustness against adverse conditions is NOT the same as internal resiliency. Those defects, as far as we know, are not affecting the origin of consciousness itself. Which is my point.

> How is it a false dichotomy? If you want consciousness to NOT be simulateable, then you need some essential component to our minds that can't be simulated (call it soul or whatever) and for that thing to interface with our physical bodies (obviously).

If you need two things to happen at the same time in sync with each other no matter if they are separated by billions of miles, then you need faster-than-light travel, or some magic [1]; see what I did there?

1. I.e., quantum entanglement

myrmidon 3 days ago | parent | next [-]

> If you need two things to happen at the same time in sync with each other no matter if they are separated by billions of miles, then you need faster-than-light travel, or some magic [1]; see what I did there?

No. Because even if you had solid evidence for the hypothesis that quantum mechanical effects are indispensable in making our brains work (which we don't), then that is still not preventing simulation. You need some uncomputable component, which physics right now neither provides nor predicts.

And fleeing into "we don't know 100% of physics yet" is a bad hypothesis, because we can make very accurate physical predictions already; you would need our brains to "amplify" some very small gap in our physical understanding, and this does not match how "robust" the operation of our brain is. Amplifiers, by their very nature, are highly sensitive to disruption or disturbances, but a human can stay conscious even with a particle accelerator firing through his brain.

tsimionescu 3 days ago | parent | prev [-]

> If you need two things to happen at the same time in sync with each other no matter if they are separated by billions of miles, then you need faster-than-light travel, or some magic [1]

This makes no sense as written - by definition, there is no concept of "at the same time" for events that are spacelike separated like this. Quantum entanglement allows you to know something about the statistical outcomes of experiments that are carried over a long distance away from you, but that's about it (there's a simpler version, where you can know some facts for certain, but that one actually looks just like classical correlation, so it's not that interesting on its own).

I do get the point that we don't know what we don't know, so that a radical new form of physics, as alien to current physics as quantum entanglement is to classical physics, could exist. But this is an anti-scientific position to take. There's nothing about consciousness that breaks any known law of physics today, so the only logical position is to suppose that consciousness is explainable by current physics. We can't go around positing unknown new physics behind every phenomenon we haven't entirely characterized and understood yet.

prmph 2 days ago | parent [-]

> There's nothing about consciousness that breaks any known law of physics today, so the only logical position is to suppose that consciousness is explainable by current physics

Quite the claim to make

tsimionescu 2 days ago | parent [-]

Is it? It's quite uncontroversial I think that consciousness has no special impact in physics, there's no physical experiment that is affected by a consciousness being present or not. Electrons don't behave differently if a human is looking at them versus a machine, as far as any current physical experiment has ever found.

If we agree on this, then it follows logically that we don't need new physics to explain consciousnesses. I'm not claiming it's impossible that consciousness is created by physics we don't yet know - just claiming that it's also not impossible that it's not. Similarly, we don't fully understand the pancreas, and it could be that the pancreas works in a way that isn't fully explainable by current physics - but there's currently no reason to believe that, so we shouldn't assume that.

prmph a day ago | parent [-]

> It's quite uncontroversial I think that consciousness has no special impact in physics, there's no physical experiment that is affected by a consciousness being present or not. Electrons don't behave differently if a human is looking at them versus a machine, as far as any current physical experiment has ever found.

Way to totally miss the point. We can't detect or measure consciousness, so therefore there is nothing to explain. /s Like an LLM that deletes or emasculates tests it is unable to make pass.

I know I am conscious; I also know that the stone in my hand is not. I want to understand why. It is probably the most unexplainable thing. That does not mean we ignore it. If you want to insist that my consciousness has no physical import or consequence, well, then we will have to agree to disagree.

tsimionescu a day ago | parent [-]

My point is this: find a physical experiment that can't be entirely explained by the physical signs of consciousness (e.g. electrochemical signals in the brain). As long as none can be found, there is no reason to believe that new physics is required to explain consciousness - my own or yours.

uwagar 3 days ago | parent | prev [-]

dude u need to do some psychedelics.

gf000 3 days ago | parent | prev | next [-]

Well, if you were to magically make an exact replica of a person, wouldn't it be conscious and at time 0 be the same person?

But later on, he would get different experiences and become a different person no longer identical to the first.

In extension, I would argue that magically "translating" a person to another medium (e.g. a chip) would still make for the same person, initially.

Though the word "magic" does a lot of work here.

prmph 3 days ago | parent [-]

I'm not talking about "identical" consciousnesses. I mean the same consciousness. The same consciousness cannot split into two, can it?

Either it is (and continues to be) the same consciousness, or it is not. If it were the same consciousness, then you would have a person who exists in two places at once.

tsimionescu 3 days ago | parent | next [-]

Well, "the same consciousness" it's not, as for example it occupies a different position in spacetime. It's an identical copy for a split second, and then they start diverging. Nothing so deep about any of this. When I copy a file from one disk to another, it's not the same file, they're identical copies for some time (usually, assuming no defects in the copying process), and will likely start diverging afterwards.

pka 2 days ago | parent [-]

It might be deeper than you think.

Qualia exist "outside" spacetime, e.g. redness doesn't have a position in spacetime. If consciousness is purely physical, then how can two identical systems (identical brains with identical sensory input) giving rise by definition to the same qualia not literally be the same consciousness?

tsimionescu 2 days ago | parent [-]

> Qualia exist "outside" spacetime, e.g. redness doesn't have a position in spacetime.

I'm sensing redness here and now, so the sensation of redness exists very clearly tied to a particular point in spacetime. In what sense is the qualia of redness not firmly anchored in spacetime? Of course, you could talk about the concept redness, like the concept Pi, but even then, these concepts exist in the mind of a human thinking about them, still tied to a particular location in spacetime.

> If consciousness is purely physical, then how can two identical systems (identical brains with identical sensory input) giving rise by definition to the same qualia not literally be the same consciousness?

The two brains don't receive the same sensory inputs, nothing in the experiment says they do. From the second right after the duplicate is created, their sensory inputs diverge, and so they become separate consciousnesses with the same history. They are interchangeable initially, if you gave the same sensory inputs to either of them, they would have the same output (even internally). But, they are not identical: giving some sensory input to one of them will not create any effect directly in the other one.

pka a day ago | parent [-]

> I'm sensing redness here and now, so the sensation of redness exists very clearly tied to a particular point in spacetime. In what sense is the qualia of redness not firmly anchored in spacetime? Of course, you could talk about the concept redness, like the concept Pi, but even then, these concepts exist in the mind of a human thinking about them, still tied to a particular location in spacetime.

But qualia are inherently subjective. You can correlate brain activity (which exists at a position in spacetime) to subjective experience, but that experience is not related to spacetime.

Said otherwise: imagine you are in the Matrix at a coffee shop and sense redness, but your brain is actually in a vat somewhere being fed fake sensory input. "Where" is the redness? You would clearly say that it arises in your brain in the coffee shop. Imagine then the vat is moved (so its position in spacetime changes), your brain is rolled back to its previous state, and then fed the same sensory input again. Where is the redness now?

You can't differentiate the two sensations of redness based on the actual position of the brain in spacetime. For all intents and purposes, they are the same. Qualia only depend on the internal brain state at a point in time and on the sensory input. Spacetime is nowhere to be found in that equation.

> The two brains don't receive the same sensory inputs

But let's say they do. Identical brains, identical inputs = identical qualia. What differentiates both consciousnesses?

tsimionescu a day ago | parent [-]

> But let's say they do. Identical brains, identical inputs = identical qualia. What differentiates both consciousnesses?

I'll start with this, because it should help with the other item. We know there are two identical consciousnesses exactly because they are separate in spacetime. That is, while I can send the same input to both and get the same mind, that's not the interesting thing. The interesting thing is that I also can send different inputs, and then I'll get different minds. If it really were a single consciousness, that would be impossible. For example, you can't feed me both pure redness and pure greenness at the same time, so I am a single consciousness.

Here is where we get back to the first item: if we accepted that qualia are not localized in spacetime, we'd have to accept that there is no difference between me experiencing redness and you experiencing redness. Even if you consider that your qualia are separate from my own because of our different contexts, that still doesn't fully help: perhaps two different beings on two different planets happen to lead identical lives up to some point when a meteorite hits one of the planets and gravely injures one of their bodies. Would you say that there was a single consciousness that both bodies shared, but that it suddenly split once the meteorite hit?

Now, that is a valid position to take, in some sense. But then that means that consciousness is not continuous in any way, in your view. The day the meteorite hit planet A is not special in any way for planet B. So, if the single consciousness that planet A and planet B shared stopped that day, only to give rise to two different consciousnesses, that means that this same phenomenon must happen every day, and in fact at every instant of time. So, we now must accept that any feeling of time passing must be a pure illusion, since my consciousness now is a completely different consciousness than the one that experienced the previous minute. While this is a self-consistent definition, it's much more alien than the alternative - where we would accept that consciousness is tied in spacetime to its substrate.

pka 20 hours ago | parent [-]

> Would you say that there was a single consciousness that both bodies shared, but that it suddenly split once the meteorite hit?

I agree, this is super weird. In a sense this seems to be the difference between viewing consciousness from the first person vs the third person. But until we understand how (if at all) matter generates felt experience the latter view can not, by definition, be about consciousness itself.

I guess this kind of perspective commits one to viewing first person experience in the way we understand abstract concepts - it is nonsensical to ask what the difference between this "1" here and that other "1" over there is. Well, you can say, they are at different positions and written in different materials etc, but those are not properties of the concept "1" anymore.

So yes, coming back to the thought experiment, one of the consequences of that would have to be that both bodies share the same consciousness and the moment something diverges the consciousnesses do too.

The point about time is interesting, and also directly related to AI. If at some point machines become conscious (leaving aside the question of whether that's possible at all, and how we would know without solving the aforementioned hard problem), they would presumably have to generate qualia at discrete steps. But is that so strange? The nothingness in between would not be felt (kind of like going to sleep and waking up "the next moment").

But maybe this idea can be applied to dynamical continuous systems as well, like the brain.

(Btw this conversation was super interesting, thank you!)

gf000 3 days ago | parent | prev [-]

Consciousness has no agreed upon definition to begin with, but I like to think of it as to what a whirlwind is to a bunch of air molecules (that is, an example of emergent behavior)

So your question is, are two whirlwinds with identical properties (same speed, same direction, shape etc) the same in one box of air, vs another identical box?

prmph 3 days ago | parent [-]

Exactly, I guess this starts to get into philosophical questions around identity real quick.

To me, two such whirlwinds are identical but not the same. They are the same only if they are guaranteed to have the same value for every conceivable property, forever, and even this condition may not be enough.

quantum_state 3 days ago | parent | prev [-]

At some point, quantum effects will need to be accounted for. The no cloning theorem will make it hard to replicate the quantum state of the brain.

prmph 3 days ago | parent | prev [-]

There are many aspects to this that people like yourself miss, but I think we need satisfactory answers to them (or at least rigorous explorations of them) before we can make headway in these sorts of discussions.

Imagine we assume that A.I. could be conscious. What would be the identity/scope of that consciousness? To understand what I'm driving at, let's make an analogy to humans. Our consciousness is scoped to our bodies. We see through sense organs, and our brain, which processes these signals, is located at a specific point in space. But we still do not know how consciousness arises in the brain and is bound to the body.

If you equate computation of sufficient complexity to consciousness, then the question arises: what exactly about computation would produce consciousness? If we perform the same computation on a different substrate, would that then be the same consciousness, or a copy of the original? If it would not be the same consciousness, then just what gives consciousness its identity?

I believe you would find it ridiculous to say that just because we are performing the computation on this chip, therefore the identity of the resulting consciousness is scoped to this chip.

gf000 3 days ago | parent | next [-]

> Imagine we assume that A.I. could be conscious. What would be the identity/scope of that consciousness

Well, first I would ask whether this question makes sense in the first place. Does consciousness have a scope? Does consciousness even exist? Or is that more of a name attributed to some pattern we recognize in our own way of thinking (but may not be universal)?

Also, would a person missing an arm, but having a robot arm they can control, have their consciousness' "scope" extended to it? Given that people have phantom pains, is a physical body even needed for something to be considered part of you?

tsimionescu 3 days ago | parent | prev [-]

This all sounds very irrelevant. Consciousness is clearly tied to specific parts of a substrate. My consciousness doesn't change when a hair falls off my head, nor when I cut my fingernails. But it does change in some way if you were to cut the tip of my finger, or if I take a hormone pill.

Similarly, if we can compute consciousness on a chip, then the chip obviously contains that consciousness. You can experimentally determine to what extent this is true: for example, you can experimentally check if increasing the clock frequency of said chip alters the consciousness that it is computing. Or if changing the thermal paste that attaches it to its cooler does so. I don't know what the results of these experiments would be, but they would be quite clearly determined.

Of course, there would certainly be some scale, and at some point it becomes semantics. The same is true with human consciousness: some aspects of the body are more tightly coupled to consciousness than others; if you cut my hand, my consciousness will change more than if you cut a small piece of my bowel, but less than if you cut out a large piece of my brain. At what point do you draw the line and say "consciousness exists in the brain but not the hands"? It's all arbitrary to some extent. Even worse, say I use a journal where I write down some of my most cherished thoughts, and say that I am quite forgetful and I often go through this journal to remind myself of various thoughts before taking a decision. Would it not then be fair to say that the journal itself contains a part of my consciousness? After all, if someone were to tamper with it in subtle enough ways, they would certainly be able to influence my thought process, more so than even cutting off one of my hands, wouldn't they?

prmph 3 days ago | parent [-]

You make some interesting points, but:

> Similarly, if we can compute consciousness on a chip, then the chip obviously contains that consciousness.

This is like claiming that neurons are conscious, which as far as we can tell, they are not. For all you know, it is the algorithm that could be conscious. Or some interplay between the algorithm and the substrate, OR something else.

Another way to think of the problem: imagine a massive cluster performing computation that is thought to give rise to consciousness. Is it the cluster that is conscious? Or the individual machines, or the chips, or the algorithm, or something else?

I personally don't think any of these can be conscious, but those who do should explain how they figure these things out.

hackinthebochs 2 days ago | parent | next [-]

>Is it the cluster that is conscious? Or the individual machines, or the chips, or the algorithm, or something else?

The bound informational dynamic that supervenes on the activity of the individual units in the cluster. What people typically miss is that the algorithm when engaged in a computing substrate is not just inert symbols, but an active, potent causal/dynamical structure. Information flows as modulated signals to and from each component and these signals are integrated such that the characteristic property of the aggregate signal is maintained. This binding of signals by the active interplay of component signals from the distributed components realizes the singular identity. If there is consciousness here, it is in this construct.

tsimionescu 3 days ago | parent | prev [-]

I explained the experiments that you would do to figure that out: you modify parts of the system, and check if and how much that affects the consciousness. Paint the interconnects a different color: probably won't affect it. Replace the interconnect protocol with a different one: probably will have some effect. So, the paint on the interconnect: not a part of the consciousness. The interconnect protocol: part of the consciousness. If we are convinced that this is a real consciousness and thus these experiments are immoral, we simply wait until accidents naturally occur and draw conclusions from that, just like we do with human consciousness.

Of course, "the consciousness" is a nebulous concept. It would be like asking "which part of my processor is Windows" to some extent. But it's still fair to say that Windows is contained within my computer, and that the metal framing of the computer is not part of Windows.