almosthere 3 days ago

Well, I think because we know how the code is written - in the sense that humans quite literally wrote the code for it - it's definitely not thinking, and it is literally doing what we asked, based on the data we gave it. It is specifically executing code we thought of. The output, of course - we had no flying idea it would work this well.

But it is not sentient. It has no idea of a self or anything like that. If it makes people believe that it does, it is because we have written so much lore about it in the training data.

og_kalu 3 days ago | parent | next [-]

We do not write the code that makes it do what it does. We write the code that trains it to figure out how to do what it does. There's a big difference.

almosthere 3 days ago | parent | next [-]

The code that builds the models and performs inference from them is code we have written. The data in the model is obviously the big trick. But what I'm saying is that if you run inference, that alone does not give it super-powers over your computer. You can write some agentic framework where it WOULD have power over your computer, but that's not what I'm referring to.

It's not a living thing inside the computer, it's just the inference building text token by token using probabilities based on the pre-computed model.
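
For concreteness, that loop looks roughly like this - a minimal sketch assuming a Hugging Face-style causal model and the transformers/torch libraries ("gpt2" is just an example checkpoint), not anyone's production code:

    # Minimal sketch of token-by-token inference: score every possible
    # next token, turn scores into probabilities, sample one, repeat.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tok("Hello, world", return_tensors="pt").input_ids
    for _ in range(20):                        # emit 20 more tokens
        logits = model(ids).logits[0, -1]      # scores for the next token
        probs = torch.softmax(logits, dim=-1)  # scores -> probabilities
        next_id = torch.multinomial(probs, 1)  # pick one according to them
        ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)

    print(tok.decode(ids[0]))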

gf000 3 days ago | parent | next [-]

> It's not a living thing inside the computer, it's just the inference building text token by token using probabilities based on the pre-computed model.

Sure, and humans are just biochemical reactions moving muscles as their interface with the physical world.

I think the model of operation is not a good criticism, but please see my reply to the root comment in this thread where I detail my thoughts a bit.

og_kalu 3 days ago | parent | prev | next [-]

You cannot say, 'we know it's not thinking because we wrote the code' when the inference 'code' we wrote amounts to, 'Hey, just do whatever you figured out during training okay'.

'Power over your computer', all that is orthogonal to the point. A human brain without a functioning body would still be thinking.

almosthere 3 days ago | parent [-]

Well, a model by itself with data that emits a bunch of human written words is literally no different than what JIRA does when it reads a database table and shits it out to a screen, except maybe a lot more GPU usage.

I grant you that, yes, the data in the model is a LOT more cool, but some team could, by hand, given billions of years (well, probably at least 1 octillion years), reproduce that model and save it to a disk. Again, no different than data stored in JIRA at that point.

So basically if you have that stance you'd have to agree that when we FIRST invented computers, we created intelligence that is "thinking".

og_kalu 3 days ago | parent | next [-]

>Well, a model by itself with data that emits a bunch of human written words is literally no different than what JIRA does when it reads a database table and shits it out to a screen, except maybe a lot more GPU usage.

Obviously, it is different or else we would just use JIRA and a database to replace GPT. Models very obviously do NOT store training data in the weights in the way you are imagining.

>So basically if you have that stance you'd have to agree that when we FIRST invented computers, we created intelligence that is "thinking".

Thinking is by all appearances substrate independent. The moment we created computers, we created another substrate that could, in the future, think.

almosthere 3 days ago | parent [-]

But LLMs are effectively a very complex if/else if tree:

if the user types "hi", respond with "hi" or "bye" or "..." - you get the point. It's basically storing the most probable following words (tokens) given the current point and its history.

That's not a brain and it's not thinking. It's similar to JIRA because it's stored information and there are if statements (admins can do this, users can do that).

Yes it is more complex, but it's nowhere near the complexity of the human or bird brain that does not use clocks, does not have "Turing machines inside", or any of the other complete junk other people posted in this thread.

The information in Jira is just less complex, but it's in the same vein of the data in an LLM, just 10^100 times more complex. Just because something is complex does not mean it thinks.

iainmerrick 2 days ago | parent [-]

This is a pretty tired argument that I don't think really goes anywhere useful or illuminates anything (if I'm following you correctly, it sounds like the good old Chinese Room, where "a few slips of paper" can't possibly be conscious).

> Yes it is more complex, but it's nowhere near the complexity of the human or bird brain that does not use clocks, does not have "Turing machines inside", or any of the other complete junk other people posted in this thread.

> The information in Jira is just less complex, but it's in the same vein of the data in an LLM, just 10^100 times more complex. Just because something is complex does not mean it thinks.

So, what is the missing element that would satisfy you? It's "nowhere near the complexity of the human or bird brain", so I guess it needs to be more complex, but at the same time "just because something is complex does not mean it thinks".

Does it need to be struck by lightning or something so it gets infused with the living essence?

almosthere 2 days ago | parent [-]

Well, at the moment it needs to be born. Nothing else has agency on this planet. So yes, the bar is HIGH. Just because you have a computer that can count beans FAST, and you counted a trillion beans, it does not mean that was an important feat. When LLMs were created, they led to a lot of very useful software developments. But it is just a large data file that's read in a special way. It has no agency; it does not just start thinking on its own unless it is programmatically fed data. It has to be triggered to do something.

If you want the best comparison, it's closer to a plant - it reacts ONLY to external stimulus: sunlight, water, etc... but it does not think. (And I'm not comparing it to a plant so you can say - SEE, you said it's alive!) It's just a comparison.

MrScruff 2 days ago | parent | prev [-]

You're getting to the heart of the problem here. At what point in evolutionary history does "thinking" exist in biological machines? Is a jumping spider "thinking"? What about consciousness?

hackinthebochs 3 days ago | parent | prev [-]

This is a bad take. We didn't write the model, we wrote an algorithm that searches the space of models that conform to some high level constraints as specified by the stacked transformer architecture. But stacked transformers are a very general computational paradigm. The training aspect converges the parameters to a specific model that well reproduces the training data. But the computational circuits the model picks out are discovered, not programmed. The emergent structures realize new computational dynamics that we are mostly blind to. We are not the programmers of these models, rather we are their incubators.
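
To put the distinction in code terms - a toy sketch, not any lab's actual pipeline, with every name illustrative (PyTorch-style): the loop below is the kind of thing we author; the circuits the weights settle into are not.

    # We write the search procedure; we do not write the function the
    # weights converge to. "model" is a stand-in for a transformer stack.
    import torch
    import torch.nn as nn

    vocab, dim = 50_000, 512
    model = nn.Sequential(nn.Embedding(vocab, dim), nn.Linear(dim, vocab))
    opt = torch.optim.AdamW(model.parameters(), lr=3e-4)
    loss_fn = nn.CrossEntropyLoss()

    def training_step(tokens):            # tokens: LongTensor, shape [seq_len]
        logits = model(tokens[:-1])       # predict each next token
        loss = loss_fn(logits, tokens[1:])
        opt.zero_grad()
        loss.backward()                   # nudge parameters toward the data
        opt.step()
        return loss.item()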

As far as sentience is concerned, we can't say they aren't sentient because we don't know the computational structures these models realize, nor do we know the computational structures required for sentience.

almosthere 3 days ago | parent [-]

However, there is another big problem: this would require a blob of data in a file to be labelled as "alive" even if it's on a disk in a garbage dump with no CPU or GPU anywhere near it.

The inference software that would normally read from that file is also not alive, as it's literally very concise code that we wrote to traverse through that file.

So if the disk isn't alive, the file on it isn't alive, the inference software is not alive - then what are you saying is alive and thinking?

hackinthebochs 3 days ago | parent | next [-]

This is an overly reductive view of a fully trained LLM. You have identified the pieces, but you miss the whole. The inference code is like a circuit builder, it represents the high level matmuls and the potential paths for dataflow. The data blob as the fully converged model configures this circuit builder in the sense of specifying the exact pathways information flows through the system. But this isn't some inert formalism, this is an active, potent causal structure realized by the base computational substrate that is influencing and being influenced by the world. If anything is conscious here, it would be this structure. If the computational theory of mind is true, then there are some specific information dynamics that realize consciousness. Whether or not LLM training finds these structures is an open question.

goatlover 3 days ago | parent | prev | next [-]

A similar point was made by Jaron Lanier in his paper, "You can't argue with a Zombie".

electrograv 3 days ago | parent | prev [-]

> So if the disk isn't alive, the file on it isn't alive, the inference software is not alive - then what are you saying is alive and thinking?

“So if the severed head isn’t alive, the disembodied heart isn’t alive, the jar of blood we drained out isn’t alive - then what are you saying is alive and thinking?”

- Some silicon alien life forms somewhere debating whether the human life form they just disassembled could ever be alive and thinking

almosthere 2 days ago | parent [-]

Just because you can say "HA - he used an argument that I can compare to a dead human" does not make your argument strong - there are many differences between a file on a computer and a murdered human who will never come back and think again.

mbesto 3 days ago | parent | prev | next [-]

I think the discrepancy is this:

1. We trained it on a fraction of the world's information (e.g. text and media that is explicitly online)

2. It carries all of the biases us humans have and, worse, the biases that are present in the information we chose to explicitly share online (which may or may not be different from the experiences humans have in everyday life)

nix0n 3 days ago | parent | next [-]

> It carries all of the biases us humans have and worse the biases that are present in the information we chose to explicitly share online

This is going to be a huge problem. Most people assume computers are unbiased and rational, and increasing use of AI will lead to more and larger decisions being made by AI.

aryehof 2 days ago | parent | prev [-]

I see this a lot in what LLMs know and promote in terms of software architecture.

All seem biased to recent buzzwords and approaches. Discussions will include the same hand-waving of DDD, event-sourcing and hexagonal services, i.e. the current fashion. Nothing of worth apparently preceded them.

I fear that we are condemned to a future where there is no new novel progress, just a regurgitation of the current fashions and biases.

abakker 3 days ago | parent | prev [-]

and then the code to give it context. AFAIU, there is a lot of post-training "setup" in the context and variables to get the trained model to "behave as we instruct it to".

Am I wrong about this?

gf000 3 days ago | parent | prev | next [-]

Well, unless you believe in some spiritual, non-physical aspect of consciousness, we could probably agree that human intelligence is Turing-complete (with a slightly sloppy use of terms).

So any other Turing-complete model can emulate it, including a computer. We can even randomly generate Turing machines, as they are just data. Now imagine we are extremely lucky and happen to end up with a super-intelligent program which through the mediums it can communicate (it could be simply text-based but a 2D video with audio is no different for my perspective) can't be differentiated from a human being.

Would you consider it sentient?

Now replace the random generation with, say, a backpropagation algorithm. If it's sufficiently large, don't you think it's no different from the former case - that is, novel qualities could emerge?

With that said, I don't think that current LLMs are anywhere close to this category, but I just don't think your reasoning is sound.

DanHulton 3 days ago | parent | next [-]

> we could probably agree that human intelligence is Turing-complete (with a slightly sloppy use of terms).

> So any other Turing-complete model can emulate it

You're going off the rails IMMEDIATELY in your logic.

Sure, one Turing-complete computer language can have its logic "emulated" by another, fine. But human intelligence is not a computer language -- you're mixing up the terms "Turing complete" and "Turing test".

It's like mixing up the terms "Strawberry jam" and "traffic jam" and then going on to talk about how cars taste on toast. It's nonsensical.

gf000 3 days ago | parent [-]

Game of Life, PowerPoint, and a bunch of non-PL stuff are all Turing-complete. I don't mix up terms - I did use slightly sloppy terminology, but it is the correct concept. My point is that we don't know of a computational model that can't be expressed by a Turing machine; humans are a physical "machine", ergo we must also fall into that category.

Give my comment another read, but it was quite understandable from context. (Also, you may want to give the Turing paper a read, because executability by a person was an important concept within it.)

DanHulton 2 days ago | parent | next [-]

Again, you're going wildly off the rails in your logic. Sure, "executable by a human" is part of the definition for Turing machines, but that's only talking about Turing-specific capabilities. If you want to argue that a Turing machine can emulate the specific definition of Turing machine capabilities that humans can perform, that's fine. But you're saying that because humans can ACT LIKE Turing machines, they must BE Turing machines, and are therefore emulatable.

This is the equivalent of saying "I have set up a complex mechanical computer powered by water that is Turing complete. Since any Turing complete system can emulate another one, it means that any other Turing complete system can also make things wet and irrigate farms."

Human intelligence is not understood. It can be made to do Turing complete things, but you can't invert that and say that because you've read the paper on Turing completeness, you now understand human intelligence.

coopierez 2 days ago | parent | prev [-]

But humans can do things Turing machines cannot. Such as eating a sandwich.

gf000 2 days ago | parent [-]

That's not a computation, it's a side effect. It just depends on what you wire your "computer" up to. A Turing machine in itself is just a (potentially non-returning) mathematical function, but you are free to map any input/output to it.

Actually, the way LLMs are extended with tools is pretty much the same (an LLM itself has no access to the internet, but if it returns some specific symbols, the external "glue" will do a search and then the LLM is free to use the results).
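
A rough sketch of that glue, with every name hypothetical (the model only ever maps text to text; a small outer loop decides whether the text it emitted is a tool request):

    # Hypothetical tool "glue": the model has no network access; this
    # loop watches its output for a marker and performs the I/O itself.
    def run_with_tools(prompt, generate, web_search):
        transcript = prompt
        while True:
            reply = generate(transcript)        # pure text in, text out
            if reply.startswith("SEARCH:"):     # model asked for a search
                query = reply[len("SEARCH:"):].strip()
                transcript += reply + "\nRESULTS: " + web_search(query) + "\n"
            else:
                return reply                    # final answer, no tool needed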

almosthere 3 days ago | parent | prev | next [-]

We used to say "if you put a million monkeys on typewriters you would eventually get Shakespeare", and no one would ever say that anymore, because now we can literally write Shakespeare with an LLM.

And the monkey strategy has been 100% dismissed as shit.

We know how to deploy monkeys on typewriters, but we don't know what they'll type.

We know how to deploy transformers to train a model and run inference on it, but we don't know what they'll type.

We DON'T know how a thinking human (or animal) brain works.

Do you see the difference?

nearbuy 3 days ago | parent | next [-]

The monkeys on typewriters saying is just a colorful way of saying that an infinite random sequence will contain all finite sequences somewhere within it. Which is true. But I don't see what infinite random sequences have to do with LLMs or human thinking.

> Do you see the difference

No? I'm not sure what you're getting at.
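
For scale, the saying really only works for infinite sequences - a back-of-the-envelope sketch (assuming a hypothetical 27-key typewriter and independent random keystrokes) shows why no finite pile of monkeys gets you anywhere:

    # Expected random keystrokes before one specific phrase shows up
    # (ignoring self-overlap corrections): alphabet_size ** length.
    phrase = "to be or not to be"
    alphabet = 27                   # 26 letters plus a space key
    print(alphabet ** len(phrase))  # ~5.8e25 keystrokes for 18 characters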

procaryote 3 days ago | parent | prev | next [-]

To be fair, we also trained the LLM on (among other things) Shakespeare, and adjusted the weights so that generating Shakespeare would be more likely after that training.

We don't claim a JPEG can paint great art, even though certain jpegs do.

almosthere 3 days ago | parent [-]

So, more proof it's not thinking, right? It can only regurgitate a large if/else superstructure with some jumping around.

procaryote 2 days ago | parent [-]

Who truly knows if you can make an if-else + randomness structure big enough to become smart?

But yes, we built a machine that generates text similar to what we built it from, and now we're looking at it generating text and are all impressed.

KoolKat23 3 days ago | parent | prev [-]

I was going to use this analogy in the exact opposite way. We do have a very good understanding of how the human brain works. Saying we don't understand how the brain works is like saying we don't understand how the weather works.

"If you put a million monkeys on typewriters you would eventually get Shakespeare" is exactly why LLMs will succeed and why humans have succeeded. If this weren't the case, why didn't humans 30,000 years ago create spacecraft, if we were endowed with the same natural "gift"?

almosthere 3 days ago | parent [-]

Yeah no, show me one scientific paper that says we know how the brain works. And not a single neuron because that does absolute shit towards understanding thinking.

KoolKat23 3 days ago | parent [-]

This is exactly why I mentioned the weather.

A scientific paper has to be verifiable; you should be able to recreate the experiment and come to the same conclusion. That's very, very difficult to do with brains that have trillions of parameters and can't be controlled down to the neuron level. Notwithstanding the ethical issues.

We don't have a world weather simulator that is 100% accurate either given the complex interplay and inability to control the variables i.e. it's not verifiable. It'd be a bit silly to say we don't know why it's going to rain at my house tomorrow.

Until then it is a hypothesis, and we can't say we know, even if the overwhelming evidence indicates that in fact we do know.

myrmidon 3 days ago | parent | prev | next [-]

> Would you consider it sentient?

Absolutely.

If you simulated a human brain by the atom, would you think the resulting construct would NOT be? What would be missing?

I think consciousness is simply an emergent property of our nervous system, but in order to express itself "language" is obviously needed and thus requires lots of complexity (more than what we typically see in animals or computer systems until recently).

prmph 3 days ago | parent [-]

> If you simulated a human brain by the atom,

That is what we don't know is possible. You don't even know what physics or particles are as yet undiscovered. And from what we even know currently, atoms are too coarse to form the basis of such "cloning"

And, my viewpoint is that, even if this were possible, just because you simulated a brain atom by atom, does not mean you have a consciousness. If it is the arrangement of matter that gives rise to consciousness, then would that new consciousness be the same person or not?

If you have a basis for answering that question, let's hear it.

myrmidon 3 days ago | parent | next [-]

> You don't even know what physics or particles are as yet undiscovered

You would not need the simulation to be perfect; there is ample evidence that our brains are quite robust against disturbances.

> just because you simulated a brain atom by atom, does not mean you have a consciousness.

If you don't want that to be true, you need some kind of magic, that makes the simulation behave differently from reality.

How would a simulation of your brain react to a question that you would answer "consciously"? If it gives the same responses to the same inputs, how could you argue it isn't conscious?

> If it is the arrangement of matter that gives rise to consciousness, then would that new consciousness be the same person or not?

The simulated consciousness would be a different one from the original; both could exist at the same time and would be expected to diverge. But their reactions/internal state/thoughts could be matched at least for an instant, and be very similar for potentially much longer.

I think this is just Occams razor applied to our minds: There is no evidence whatsoever that our thinking is linked to anything outside of our brains, or outside the realm of physics.

prmph 3 days ago | parent | next [-]

> "quite robust against disturbances."

does not mean that the essential thing that gives rise to consciousness is only approximate. To give an example from software, you can write software that is robust against bad input, attempts to crash it, even bit flips. But if I came in and just changed a single character in the source code, that may cause it to fail compilation, fail to run, or become quite buggy.

> If you don't want that to be true, you need some kind of magic,

This is just what I'm saying is a false dichotomy. The only reason some are unable to see beyond it is that we think the basic logic we understand is all there could be.

In this respect physics has been very helpful, because without peering into reality, we would have kept deluding ourselves that pure reason was enough to understand the world.

It's like trying to explain quantum mechanics to a well educated person or scientist from the 16th century without the benefit of experimental evidence. No way they'd believe you. In fact, they'd accuse you of violating basic logic.

myrmidon 3 days ago | parent [-]

How is it a false dichotomy? If you want consciousness to NOT be simulateable, then you need some essential component to our minds that can't be simulated (call it soul or whatever) and for that thing to interface with our physical bodies (obviously).

We have zero evidence for either.

> does not mean that the essential thing gives rise to consciousness is only approximate

But we have 8 billion different instances that are presumably conscious; plenty of them have all kinds of defects, and the whole architecture has been derived by a completely mechanical process free of any understanding (=> evolution/selection).

On the other hand, there is zero evidence of consciousness continuing/running before or after our physical brains are operational.

prmph 3 days ago | parent [-]

> plenty of them have all kinds of defects,

Defects that have not rendered them unconscious, as long as they still are alive. You seem not to see the circularity of your argument.

I gave you an example to show that robustness against adverse conditions is NOT the same as internal resiliency. Those defects, as far as we know, are not affecting the origin of consciousness itself. Which is my point.

> How is it a false dichotomy? If you want consciousness to NOT be simulateable, then you need some essential component to our minds that can't be simulated (call it soul or whatever) and for that thing to interface with our physical bodies (obviously).

If you need two things to happen at the same time in sync with each other no matter if they are separated by billions of miles, then you need faster-than-light travel, or some magic [1]; see what I did there?

1. I.e., quantum entanglement

myrmidon 3 days ago | parent | next [-]

> If you need two things to happen at the same time in sync with each other no matter if they are separated by billions of miles, then you need faster-than-light travel, or some magic [1]; see what I did there?

No. Because even if you had solid evidence for the hypothesis that quantum mechanical effects are indispensable in making our brains work (which we don't), then that is still not preventing simulation. You need some uncomputable component, which physics right now neither provides nor predicts.

And fleeing into "we don't know 100% of physics yet" is a bad hypothesis, because we can make very accurate physical predictions already-- you would need our brains to "amplify" some very small gap in our physical understanding, and this does not match with how "robust" the operation of our brain is-- amplifiers, by their very nature, are highly sensitive to disruption or disturbances, but a human can stay conscious even with a particle accelerator firing through his brain.

tsimionescu 3 days ago | parent | prev [-]

> If you need two things to happen at the same time in sync with each other no matter if they are separated by billions of miles, then you need faster-than-light travel, or some magic [1]

This makes no sense as written - by definition, there is no concept of "at the same time" for events that are spacelike separated like this. Quantum entanglement allows you to know something about the statistical outcomes of experiments that are carried over a long distance away from you, but that's about it (there's a simpler version, where you can know some facts for certain, but that one actually looks just like classical correlation, so it's not that interesting on its own).

I do get the point that we don't know what we don't know, so that a radical new form of physics, as alien to current physics as quantum entanglement is to classical physics, could exist. But this is an anti-scientific position to take. There's nothing about consciousness that breaks any known law of physics today, so the only logical position is to suppose that consciousness is explainable by current physics. We can't go around positing unknown new physics behind every phenomenon we haven't entirely characterized and understood yet.

prmph 2 days ago | parent [-]

> There's nothing about consciousness that breaks any known law of physics today, so the only logical position is to suppose that consciousness is explainable by current physics

Quite the claim to make

tsimionescu 2 days ago | parent [-]

Is it? It's quite uncontroversial I think that consciousness has no special impact in physics, there's no physical experiment that is affected by a consciousness being present or not. Electrons don't behave differently if a human is looking at them versus a machine, as far as any current physical experiment has ever found.

If we agree on this, then it follows logically that we don't need new physics to explain consciousnesses. I'm not claiming it's impossible that consciousness is created by physics we don't yet know - just claiming that it's also not impossible that it's not. Similarly, we don't fully understand the pancreas, and it could be that the pancreas works in a way that isn't fully explainable by current physics - but there's currently no reason to believe that, so we shouldn't assume that.

prmph a day ago | parent [-]

> It's quite uncontroversial I think that consciousness has no special impact in physics, there's no physical experiment that is affected by a consciousness being present or not. Electrons don't behave differently if a human is looking at them versus a machine, as far as any current physical experiment has ever found.

Way to totally miss the point. We can't detect or measure consciousness, so therefore there is nothing to explain. /s Like an LLM that deletes or emasculates tests it is unable to make pass.

I know I am conscious, I also know that the stone in my hand is not. I want to understand why. It is probably the most unexplainable thing. It does not mean we ignore it. If you want to dispute that my consciousness has no physical import nor consequence, well, then we will have to agree to disagree.

tsimionescu a day ago | parent [-]

My point is this: find a physical experiment that can't be entirely explained by the physical signs of consciousness (e.g. electrochemical signals in the brain). As long as none can be found, there is no reason to believe that new physics is required to explain consciousness - my own or yours.

uwagar 3 days ago | parent | prev [-]

dude u need to do some psychedelics.

gf000 3 days ago | parent | prev | next [-]

Well, if you were to magically make an exact replica of a person, wouldn't it be conscious and at time 0 be the same person?

But later on, he would get different experiences and become a different person no longer identical to the first.

In extension, I would argue that magically "translating" a person to another medium (e.g. a chip) would still make for the same person, initially.

Though the word "magic" does a lot of work here.

prmph 3 days ago | parent [-]

I'm not talking about "identical" consciousnesses. I mean the same consciousness. The same consciousness cannot split into two, can it?

Either it is (and continues to be) the same consciousness, or it is not. If it were the same consciousness, then you would have a person who exists in two places at once.

tsimionescu 3 days ago | parent | next [-]

Well, "the same consciousness" it's not, as for example it occupies a different position in spacetime. It's an identical copy for a split second, and then they start diverging. Nothing so deep about any of this. When I copy a file from one disk to another, it's not the same file, they're identical copies for some time (usually, assuming no defects in the copying process), and will likely start diverging afterwards.

pka 2 days ago | parent [-]

It might be deeper than you think.

Qualia exist "outside" spacetime, e.g. redness doesn't have a position in spacetime. If consciousness is purely physical, then how can two identical systems (identical brains with identical sensory input) giving rise by definition to the same qualia not literally be the same consciousness?

tsimionescu 2 days ago | parent [-]

> Qualia exist "outside" spacetime, e.g. redness doesn't have a position in spacetime.

I'm sensing redness here and now, so the sensation of redness exists very clearly tied to a particular point in spacetime. In what sense is the qualia of redness not firmly anchored in spacetime? Of course, you could talk about the concept redness, like the concept Pi, but even then, these concepts exist in the mind of a human thinking about them, still tied to a particular location in spacetime.

> If consciousness is purely physical, then how can two identical systems (identical brains with identical sensory input) giving rise by definition to the same qualia not literally be the same consciousness?

The two brains don't receive the same sensory inputs, nothing in the experiment says they do. From the second right after the duplicate is created, their sensory inputs diverge, and so they become separate consciousnesses with the same history. They are interchangeable initially, if you gave the same sensory inputs to either of them, they would have the same output (even internally). But, they are not identical: giving some sensory input to one of them will not create any effect directly in the other one.

pka a day ago | parent [-]

> I'm sensing redness here and now, so the sensation of redness exists very clearly tied to a particular point in spacetime. In what sense is the qualia of redness not firmly anchored in spacetime? Of course, you could talk about the concept redness, like the concept Pi, but even then, these concepts exist in the mind of a human thinking about them, still tied to a particular location in spacetime.

But qualia are inherently subjective. You can correlate brain activity (which exists at a position in spacetime) to subjective experience, but that experience is not related to spacetime.

Said otherwise: imagine you are in the Matrix at a coffee shop and sense redness, but your brain is actually in a vat somewhere being fed fake sensory input. "Where" is the redness? You would clearly say that it arises in your brain in the coffee shop. Imagine then the vat is moved (so its position in spacetime changes), your brain is rolled back to its previous state, and then fed the same sensory input again. Where is the redness now?

You can't differentiate the two sensations of redness based on the actual position of the brain in spacetime. For all intents and purposes, they are the same. Qualia only depend on the internal brain state at a point in time and on the sensory input. Spacetime is nowhere to be found in that equation.

> The two brains don't receive the same sensory inputs

But let's say they do. Identical brains, identical inputs = identical qualia. What differentiates both consciousnesses?

tsimionescu a day ago | parent [-]

> But let's say they do. Identical brains, identical inputs = identical qualia. What differentiates both consciousnesses?

I'll start with this, because it should help with the other item. We know there are two identical consciousnesses exactly because they are separate in spacetime. That is, while I can send the same input to both and get the same mind, that's not the interesting thing. The interesting thing is that I also can send different inputs, and then I'll get different minds. If it really were a single consciousness, that would be impossible. For example, you can't feed me both pure redness and pure greenness at the same time, so I am a single consciousness.

Here is where we get back to the first item: if we accepted that qualia are not localized in spacetime, we'd have to accept that there is no difference between me experiencing redness and you experiencing redness. Even if you consider that your qualia are separate from my own because of our different contexts, that still doesn't fully help: perhaps two different beings on two different planets happen to lead identical lives up to some point when a meteorite hits one of the planets and gravely injures one of their bodies. Would you say that there was a single consciousness that both bodies shared, but that it suddenly split once the meteorite hit?

Now, that is a valid position to take, in some sense. But then that means that consciousness is not continuous in any way, in your view. The day the meteorite hit planet A is not special in any way for planet B. So, if the single consciousness that planet A and planet B shared stopped that day, only to give rise to two different consciousnesses, that means that this same phenomenon must happen every day, and in fact at every instant of time. So, we now must accept that any feeling of time passing must be a pure illusion, since my consciousness now is a completely different consciousness than the one that experienced the previous minute. While this is a self-consistent definition, it's much more alien than the alternative - where we would accept that consciousness is tied in spacetime to its substrate.

pka 20 hours ago | parent [-]

> Would you say that there was a single consciousness that both bodies shared, but that it suddenly split once the meteorite hit?

I agree, this is super weird. In a sense this seems to be the difference between viewing consciousness from the first person vs the third person. But until we understand how (if at all) matter generates felt experience the latter view can not, by definition, be about consciousness itself.

I guess this kind of perspective commits one to viewing first person experience in the way we understand abstract concepts - it is nonsensical to ask what the difference between this "1" here and that other "1" over there is. Well, you can say, they are at different positions and written in different materials etc, but those are not properties of the concept "1" anymore.

So yes, coming back to the thought experiment, one of the consequences of that would have to be that both bodies share the same consciousness and the moment something diverges the consciousnesses do too.

The point about time is interesting, and also directly related to AI. If at some point machines become conscious (leaving aside the question of whether that's possible at all and how we would know without solving the aforementioned hard problem), they would presumably have to generate qualia at discrete steps. But is that so strange? The nothingness in between would not be felt (kind of like going to sleep and waking up "the next moment").

But maybe this idea can be applied to dynamical continuous systems as well, like the brain.

(Btw this conversation was super interesting, thank you!)

gf000 3 days ago | parent | prev [-]

Consciousness has no agreed upon definition to begin with, but I like to think of it as to what a whirlwind is to a bunch of air molecules (that is, an example of emergent behavior)

So your question is, are two whirlwinds with identical properties (same speed, same direction, shape etc) the same in one box of air, vs another identical box?

prmph 3 days ago | parent [-]

Exactly, I guess this starts to get into philosophical questions around identity real quick.

To me, two such whirlwinds are identical but not the same. They are the same only if they are guaranteed to have the same value for every conceivable property, forever, and even this condition may not be enough.

quantum_state 3 days ago | parent | prev [-]

At some point, quantum effects will need to be accounted for. The no cloning theorem will make it hard to replicate the quantum state of the brain.

prmph 3 days ago | parent | prev [-]

There are many aspects to this that people like yourself miss, but I think we need satisfactory answers to them (or at least rigorous explorations of them) before we can make headway in these sorts of discussion.

Imagine we assume that A.I. could be conscious. What would be the identity/scope of that consciousness? To understand what I'm driving at, let's make an analogy to humans. Our consciousness is scoped to our bodies. We see through sense organs, and our brain, which processes these signals, is located in a specific point in space. But we still do not know how consciousness arises in the brain and is bound to the body.

If you equate computation of sufficient complexity to consciousness, then the question arises: what exactly about computation would produce consciousness? If we perform the same computation on a different substrate, would that then be the same consciousness, or a copy of the original? If it would not be the same consciousness, then just what gives consciousness its identity?

I believe you would find it ridiculous to say that just because we are performing the computation on this chip, therefore the identity of the resulting consciousness is scoped to this chip.

gf000 3 days ago | parent | next [-]

> Imagine we assume that A.I. could be conscious. What would be the identity/scope of that consciousness

Well, first I would ask whether this question makes sense in the first place. Does consciousness have a scope? Does consciousness even exist? Or is that more of a name attributed to some pattern we recognize in our own way of thinking (but may not be universal)?

Also, would a person missing an arm, but having a robot arm they can control, have their consciousness' "scope" extended to it? Given that people have phantom pains, is a physical body even needed for something to be considered part of you?

tsimionescu 3 days ago | parent | prev [-]

This all sounds very irrelevant. Consciousness is clearly tied to specific parts of a substrate. My consciousness doesn't change when a hair falls off my head, nor when I cut my fingernails. But it does change in some way if you were to cut the tip of my finger, or if I take a hormone pill.

Similarly, if we can compute consciousness on a chip, then the chip obviously contains that consciousness. You can experimentally determine to what extent this is true: for example, you can experimentally check if increasing the clock frequency of said chip alters the consciousness that it is computing. Or if changing the thermal paste that attaches it to its cooler does so. I don't know what the results of these experiments would be, but they would be quite clearly determined.

Of course, there would certainly be some scale, and at some point it becomes semantics. The same is true with human consciousness: some aspects of the body are more tightly coupled to consciousness than others; if you cut my hand, my consciousness will change more than if you cut a small piece of my bowel, but less than if you cut out a large piece of my brain. At what point do you draw the line and say "consciousness exists in the brain but not the hands"? It's all arbitrary to some extent. Even worse, say I use a journal where I write down some of my most cherished thoughts, and say that I am quite forgetful and I often go through this journal to remind myself of various thoughts before taking a decision. Would it not then be fair to say that the journal itself contains a part of my consciousness? After all, if someone were to tamper with it in subtle enough ways, they would certainly be able to influence my thought process, more so than even cutting off one of my hands, wouldn't they?

prmph 3 days ago | parent [-]

You make some interesting points, but:

> Similarly, if we can compute consciousness on a chip, then the chip obviously contains that consciousness.

This is like claiming that neurons are conscious, which as far as we can tell, they are not. For all you know, it is the algorithm that could be conscious. Or some interplay between the algorithm and the substrate, OR something else.

Another way to think of the problem: Imagine a massive cluster performing computation that is thought to give rise to consciousness. Is it the cluster that is conscious? Or the individual machines, or the chips, or the algorithm, or something else?

I personally don't think any of these can be conscious, but those that do should explain how they figure these things out.

hackinthebochs 2 days ago | parent | next [-]

>Is it the cluster that is conscious? Or the individual machines, or the chips, or the algorithm, or something else?

The bound informational dynamic that supervenes on the activity of the individual units in the cluster. What people typically miss is that the algorithm when engaged in a computing substrate is not just inert symbols, but an active, potent causal/dynamical structure. Information flows as modulated signals to and from each component and these signals are integrated such that the characteristic property of the aggregate signal is maintained. This binding of signals by the active interplay of component signals from the distributed components realizes the singular identity. If there is consciousness here, it is in this construct.

tsimionescu 3 days ago | parent | prev [-]

I explained the experiments that you would do to figure that out: you modify parts of the system, and check if and how much that affects the consciousness. Paint the interconnects a different color: probably won't affect it. Replace the interconnect protocol with a different one: probably will have some effect. So, the paint on the interconnect: not a part of the consciousness. The interconnect protocol: part of the consciousness. If we are convinced that this is a real consciousness and thus these experiments are immoral, we simply wait until accidents naturally occur and draw conclusions from that, just like we do with human consciousness.

Of course, "the consciousness" is a nebulous concept. It would be like asking "which part of my processor is Windows" to some extent. But it's still fair to say that Windows is contained within my computer, and that the metal framing of the computer is not part of Windows.

mirekrusin 3 days ago | parent | prev | next [-]

Now convince us that you’re sentient and not just regurgitating what you’ve heard and seen in your life.

embedding-shape 3 days ago | parent | next [-]

By what definition of "sentience"? Wikipedia claims "Sentience is the ability to experience feelings and sensations" as an opening statement, which I think would be trivial depending again on your definition of "experience" and "sensations". Can a LLM hooked up to sensor events be considered to "experience sensations"? I could see arguments both ways for that.

vidarh 3 days ago | parent [-]

I have no way of measuring whether or not you experience feelings and sensations, or are just regurgitating statements to convince me of that.

The only basis I have for assuming you are sentient according to that definition is trust in your self-reports.

darkwater 3 days ago | parent | next [-]

> The only basis I have for assuming you are sentient according to that definition is trust in your self-reports

Because the other person is part of your same species, so you project your own base capabilities onto them, because so far they have shown to behave pretty similarly to how you behave. Which is the most reasonable thing to do.

Now, the day we have cyborgs that mimic also the bodies of a human a la Battlestar Galactica, we will have an interesting problem.

vidarh 3 days ago | parent [-]

It's the most reasonable thing to do because we have no actual way of measuring and knowing. It is still speculation.

embedding-shape 3 days ago | parent | prev [-]

I'm fairly sure we can measure human "sensation" as in detect physiological activity in the body in someone who is under anesthesia yet the body reacts in different ways to touch or pain.

The "feelings" part is probably harder though.

vidarh 3 days ago | parent | next [-]

We can measure the physiological activity, but not whether it gives rise to the same sensations that we experience ourselves. We can reasonably project and guess that they are the same, but we can not know.

In practical terms it does not matter - it is reasonable for us to act as if others do experience the same we do. But if we are to talk about the nature of consciousness and sentience, it does matter that the only basis we have for knowing about other sentient beings is their self-reported experience.

goatlover 3 days ago | parent [-]

We know that others do not experience the exact same sensations, because there are reported differences, some of which have been discussed on HN, such as aphantasia. The opposite would be visual thinkers. Then you have super tasters and smellers, people who have very refined palates, perhaps because their gustatory and/or olfactory senses are more heightened. Then you have savants like the musical genius who would hear three separate strands of music in his head at the same time.

mirekrusin 2 days ago | parent | prev [-]

You can measure model activity even better.

How do you know that a model processing text or image input doesn't go through a feeling of confusion or excitement, or that a corrupted image doesn't "smell" right to it?

Just the fact that you can pause and restart it doesn't mean it doesn't emerge.

3 days ago | parent | prev [-]
[deleted]
kakapo5672 3 days ago | parent | prev | next [-]

It's not accurate to say we "wrote the code for it". AI isn't built like normal software. Nowhere inside an AI will you find lines of code that say If X Then Y, and so on.

Rather, these models are literally grown during the training phase. And all the intelligence emerges from that growth. That's what makes them a black box and extremely difficult to penetrate. No one can say exactly how they work inside for a given problem.

Llamamoe 3 days ago | parent | prev | next [-]

This is probably true. But the truth is we have absolutely no idea what sentience is and what gives rise to it. We cannot identify why humans have it rather than just being complex biological machines, or whether and why other animals do. We have no idea what the rules are, never mind how and why they would or wouldn't apply to AI.

mentos 3 days ago | parent | prev | next [-]

What’s crazy to me is the mechanism of pleasure or pain. I can understand that with enough complexity we can give rise to sentience but what does it take to achieve sensation?

dontwearitout 3 days ago | parent | next [-]

This is the "hard problem of consciousness". It's more important than ever as machines begin to act more like humans, but my takeaway is we have no idea. https://en.wikipedia.org/wiki/Hard_problem_of_consciousness

vidarh 3 days ago | parent | prev | next [-]

Input is input. There's no reason why we should assume that a data source from embodiment is any different to any other data source.

spicyusername 3 days ago | parent | prev | next [-]

A body

mentos 3 days ago | parent | next [-]

I’d say it’s possible to experience mental anguish/worry without the body participating. Solely a cognitive pain from consternation.

AndrewKemendo 3 days ago | parent [-]

You can’t cognate without a body - the brain and body is a material system tightly coupled

vidarh 3 days ago | parent [-]

Ignoring that "cognate" isn't a verb, we have basis for making any claim about the necessity of that coupling.

exe34 3 days ago | parent | prev [-]

How does a body know what's going on? Would you say it has any input devices?

kbrkbr 3 days ago | parent | prev [-]

Can you tell me how you understand that?

Because I sincerely do not. I have frankly no idea how sentience arises from non sentience. But it's a topic that really interests me.

mentos 3 days ago | parent [-]

We have examples of non sentience everywhere already with animals. And then an example of sentience with humans. So if you diff our brains the difference lies within a module in our prefrontal cortex. It’s a black box of logic but I can ‘understand’ or be willing to accept that it’s owed to ‘just’ more grey matter adding the self awareness to the rest of the system.

But to me the big mystery is how animals have sensation at all to begin with. What gives rise to that is a greater mystery to me personally.

There are examples of people who have no ability to feel pain yet are still able to think. Now I wonder if they ever experience mental anguish.

DoctorOetker 3 days ago | parent [-]

I'd like to see a vote here, what percentage of HN readers believe animals have sentience or no sentience?

Clearly most animals are less educated, and most are less intelligent, but non-sentient? That sounds like 200-year old claims that "when one steps on the tail of a cat, it does indeed protest loudly, but not because it feels anything or because it would be sentient, no, no, it protests merely due to selective pressure, programming reflex circuits, since other creatures would show compassion, or back off due to a potential reaction by the cat."

Anyone who has had a pet like a cat or a dog knows they are sentient... if we consider ourselves sentient.

kbrkbr 3 days ago | parent | next [-]

I'm with you on this.

But asked for reasons I can only point to the social nature of their societies, where love and anger make sense, or of their hurt-behavior.

I also find it very hard to believe that everything else is slow evolution of components, and here all of a sudden something super complex comes into being out of nowhere.

But I still have no idea how it could work. What are the components and their interplay?

mentos 2 days ago | parent | prev [-]

I should have been more exact and said sentience vs sapience in animals vs humans.

PaulDavisThe1st 3 days ago | parent | prev | next [-]

> But it is not sentient. It has no idea of a self or anything like that.

Who stated that sentience or sense of self is a part of thinking?

marstall 3 days ago | parent | prev | next [-]

Unless the idea of us having a thinking self is just something that comes out of our mouth, an artifact of language. In which case we are not that different - in the end we all came from mere atoms, after all!

dist-epoch 3 days ago | parent | prev [-]

Your brain is just following the laws of chemistry. So where is your thinking found in a bunch of chemical reactions?