frozenlettuce 3 days ago

You can replicate all calculations done by LLMs with pen and paper. It would take ages to calculate anything, but it's possible. I don't think that pen and paper will ever "think", regardless of how complex the calculations involved are.

gus_massa 3 days ago | parent | next [-]

The official name is https://en.wikipedia.org/wiki/Chinese_room

The opinions about it are exactly the same as the ones about LLMs.

sigmoid10 3 days ago | parent | next [-]

And the counterargument is also exactly the same. Imagine you take one neuron from a brain and replace it with an artificial piece of electronics (e.g. some transistors) that only generates specific outputs based on inputs, exactly like the neuron does. Now replace another neuron. And another. Eventually, you will have the entire brain replaced with a huge set of fundamentally super simple transistors. I.e. a computer. If you believe that consciousness or the ability to think disappears somewhere during this process, then you are essentially believing in some religious metaphysics or soul-like component in our brains that cannot be measured. But if it cannot be measured, it fundamentally cannot affect you in any way. So it doesn't matter for the experiment in the end, because the outcome would be exactly the same. The only reason you might think that you are conscious and the computer is not is because you believe so. But to an outside observer, belief is all it is. Basically religion.
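To make the swap concrete, here's a toy sketch (all weights and numbers are made up; this is not a model of real neurons): a tiny "network" where each biological neuron is replaced, one at a time, by an artificial unit that computes the same input-output function, and the network's outputs never change at any step of the replacement.

```python
import math

def bio_neuron(inputs, weights):
    # A "biological" neuron: weighted sum passed through a sigmoid.
    return 1 / (1 + math.exp(-sum(w * x for w, x in zip(weights, inputs))))

def artificial_neuron(inputs, weights):
    # The replacement: a different implementation with the same
    # input-output mapping (0.5*(1+tanh(z/2)) equals sigmoid(z)).
    z = sum(w * x for w, x in zip(weights, inputs))
    return 0.5 * (1 + math.tanh(z / 2))

weights = [[0.5, -1.2], [2.0, 0.3], [-0.7, 1.1]]  # arbitrary toy weights
x = [0.8, -0.4]                                   # arbitrary toy input

# Replace neurons one at a time; the outputs stay the same throughout.
for n_replaced in range(len(weights) + 1):
    outputs = [
        (artificial_neuron if i < n_replaced else bio_neuron)(x, w)
        for i, w in enumerate(weights)
    ]
    print(n_replaced, [round(o, 6) for o in outputs])
```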

kipchak 3 days ago | parent | next [-]

It seems like the brain "just" being a giant number of neurons is an assumption. As I understand it, this is still an area of active research; the role of glial cells, for example. The complete function may or may not be pen-and-paper-able.

thrance 2 days ago | parent | next [-]

There are indeed many people trying to justify this magical thinking by seeking something, anything in the brain that is out of the ordinary. They've been unsuccessful so far.

Penrose comes to mind; he will die on the hill that the brain involves quantum computations somehow, to explain his dualist position of "the soul being the entity responsible for deciding how the quantum states within the brain collapse, hence somehow controlling the body" (I am grossly simplifying). But even if that were the case, if the brain did involve quantum computations, those are still, well, computable. They just involve some amount of randomness, but so what? To continue with grandparent's experiment, you'd have to replace biological neurons with tiny quantum-computer neurons instead, but the gist is the same.

sigmoid10 2 days ago | parent [-]

You wouldn't even need quantum computer neurons. We can simulate quantum systems on normal circuits, albeit not very efficiently. But for the experiment this wouldn't matter. The only important thing would be that you can measure it, which in turn would allow you to replicate it in some non-human circuit. And if you fundamentally can't measure this aspect for some weird reason, you once again reach the same conclusion as above.

thrance 2 days ago | parent [-]

You can simulate it, but you usually use PRNG to decide how your simulated wave function "collapses". So in the spirit of the original thought experiment, I felt it more adequate to replace the quantum part (if it even exists) by another actually quantum part. But indeed, using fake quantum shouldn't change a thing.

Tadpole9181 3 days ago | parent | prev [-]

> The complete function may or may not be pen and paper-able.

Would you mind expanding on this? At a base read, it seems you're implying magic exists.

kipchak 3 days ago | parent [-]

It could well be the case that the brain can be simulated, but presently we don't know exactly what variables/components must be simulated. Does ongoing neuroplasticity, for example, need to be a component of the simulation? Are there some as-yet-unknown causal mechanisms or interactions that may be essential?

Tadpole9181 3 days ago | parent [-]

All of those examples could still be done on pen and paper or otherwise simulated with a different medium, though.

AFAICT, your comment above would need some mechanism that is physically impossible and incalculable for the argument to work, and then somehow have that mechanism operate in a human brain despite being physically impossible and incalculable.

bigfishrunning 3 days ago | parent | prev | next [-]

> component in our brains that can not be measured.

"Can not be measured", probably not. "We don't know how to measure", almost certainly.

I am capable of belief, and I've seen no evidence that the computer is. It's also possible that I'm the only person that is conscious. It's even possible that you are!

danaris 3 days ago | parent | prev [-]

But you are now arguing against a strawman, namely, "it is not possible to construct a computer that thinks".

The argument that was actually made was "LLMs do not think".

umanwizard 3 days ago | parent [-]

A: X, because Y

B: But Y would also imply Z

C: A was never arguing for Z! This is a strawman!

danaris 3 days ago | parent [-]

"LLMs cannot think like brains" does not imply "no computer that could ever be constructed could think like a brain".

umanwizard 3 days ago | parent [-]

“LLMs cannot think like brains” is “X”.

danaris 2 days ago | parent [-]

That appears to be your own assumptions coming into play.

Everything I've seen says "LLMs cannot think like brains" is not dependent on an argument that "no computer can think like a brain", but rather on an understanding of just what LLMs are—and what they are not.

circuit10 3 days ago | parent | prev [-]

I don’t understand why people say the Chinese Room thing would prove LLMs don’t think. To me it’s obvious that the person doesn’t understand Chinese but the process does; similarly, the CPU itself doesn’t understand the concepts an LLM can work with, but the LLM itself does, and a neuron doesn’t understand concepts, but the entire structure of your brain does

The concept of understanding emerges on a higher level from the way the neurons (biological or virtual) are connected, or the way the instructions being followed by the human in the Chinese room process the information

But really this is a philosophical/definitional thing about what you call “thinking”

Edit: I see my take on this is listed on the page as the “System reply”

Kim_Bruning 3 days ago | parent [-]

If 100 top-notch philosophers disagree with you, that means you get 100 citations from top-notch philosophers. :-P

Check out e.g. Dennett, or his opinions about Searle. Have fun with, e.g., this:

"By Searle’s own count, there are over a hundred published attacks on it. He can count them, but I guess he can’t read them, for in all those years he has never to my knowledge responded in detail to the dozens of devastating criticisms they contain;"

https://www.nybooks.com/articles/1995/12/21/the-mystery-of-c...

mcswell 3 days ago | parent | prev | next [-]

I don't see the relevance of that argument (which other responders to your post have pointed out as Searle's Chinese Room argument). The pen and paper are of course not doing any thinking, but then the pen isn't doing any writing on its own, either. It's the system of pen + paper + human that's doing the thinking.

frozenlettuce 3 days ago | parent [-]

The point of my argument is that people project some "ethereal" properties onto computations that happen in the... computer. Probably because electricity is involved, making things show up as "magic" from our point of view, and making it easier to project consciousness or thinking onto the device. The cloud makes that even more abstract. But if you are aware that the transistors are just a medium that replicates what we already did for ages with knots, fingers, and paint, it gets easier to see them as plain objects. Even the resulting artifacts that the machine produces are only meaningful from our point of view, because you need prior knowledge to read the output signals. So yeah, those devices end up being an extension of ourselves.

hackinthebochs 3 days ago | parent [-]

Your view is missing the forest for the trees. You see individual objects but miss the aggregate whole. You have a hard time conceiving of how exotic computers can be conscious because we are scale chauvinists by design. Our minds engage with the world on certain time and length scales, and so we naturally conceptualize our world based on entities that exist on those scales. But computing is necessarily scale independent. It doesn't matter to the computation if it is running on some 100GHz substrate or a .0001Hz one. It doesn't matter if it's running on a CPU chip the size of a quarter or spread out over the entire planet. Computation is about how information is transformed in semantically meaningful ways. Scale just doesn't matter.

If you were a mind supervening on the behavior of some massive time/space scale computer, how would you know? How could you tell the difference between running on a human making marks with pen and paper and running on a modern CPU? Your experience updates based on information transformations, not based on how fast the fundamental substrate is changing. When your conscious experience changes, that means your current state is substantially different from your prior state and you can recognize this difference. Our human-scale chauvinism gets in the way of properly imagining this. A mind running on a CPU or a large collection of human computers is equally plausible.

A common question people like to ask is "where is the consciousness" in such a system. This is an important question if only because it highlights the futility of such questions. Where is Microsoft Word when it is running on my computer? How can you draw a boundary around a computation when there is a multitude of essential and non-essential parts of the system working together to construct the relevant causal dynamic? It's just not a well-defined question. There is no one place where Microsoft Word occurs, nor is there any one place where consciousness occurs in a system. Is state being properly recorded and correctly leveraged to compute the next state? The consciousness is in this process.

mcswell a day ago | parent [-]

"'where is the consciousness' in such a system": One could ask the same of humans: where is the consciousness? The modern answer is (somewhere) in the brain, and I admit that's likely true. But we have no proof--no evidence, really--that our consciousness is not in some other dimension, and our brains could be receiving different kinds of signals from our souls in that other dimension, like TV sets receive audio and video signals from an old fashioned broadcast TV station.

hackinthebochs 12 hours ago | parent [-]

This brain-receiver idea just isn't a very good theory. For one, it increases the complexity of the model without any corresponding increase in explanatory power. The mystery of consciousness remains, except now you have all this extra mechanism involved.

Another issue is that the brain is overly complex for consciousness to just be received from elsewhere. Typically a radio is much less complex than the signal being received, or at least less complex than the potential space of signals it is possible to receive. We don't see that with consciousness. In fact, consciousness seems to be far less complex than the brain that supports it. The issue of the specificity of brain damage and the corresponding specificity in conscious deficits also points away from the receiver idea.

BobbyJo 3 days ago | parent | prev | next [-]

If you put a droplet of water in a warm bowl every 12 hours, the bowl will remain empty as the water will evaporate. That does not mean that if you put a trillion droplets in every twelve hours it will still remain empty.

SiempreViernes 3 days ago | parent [-]

It will also not be empty if I put the bowl in the sea, which is a remark about the nature of thought that proves exactly as much as your comment.

BobbyJo 3 days ago | parent [-]

The point I was trying to make was that the time you use to perform the calculation may change whether there is an "experience" on behalf of the calculation. Without specifying the basis of subjectivity, you can't rule anything out as far as what matters and what doesn't. Maybe the speed or locality with which the calculations happen matters. Like the water drops: given the same amount of time, eventually all the water will evaporate in either case, leading to the same end state, but the intermediate states are very different.

Wowfunhappy 3 days ago | parent | prev | next [-]

https://xkcd.com/505/

You can replicate the entire universe with pen and paper (or a bunch of rocks). It would take an unimaginably long time, and we haven't discovered all the calculations you'd need to do yet, but presumably they exist and this could be done.

Does that actually make a universe? I don't know!

The comic is meant to be a joke, I think, but I find myself thinking about it all the time!!!

frozenlettuce 3 days ago | parent [-]

Even worse, as we are part of the universe, we would need to simulate ourselves and the very simulation that we are creating. You would also need to replicate the simulation of the simulation, leading to an eternal loop that would demand infinite matter and time (and would still not be enough!). Probably, you can't simulate something while being part of it.

Wowfunhappy 3 days ago | parent [-]

It doesn’t need to be our universe, just a universe.

The question is, are the people in the simulated universe real people? Do they think and feel like we do—are they conscious? Either answer seems like it can’t possibly be right!

thrance 3 days ago | parent | prev | next [-]

You're arguing against Functionalism [0], of which I'd encourage you to at least read the Wikipedia page. Why would doing the brain's computations on pen and paper rather than on wetware lead to different outcomes? And how?

Connect your pen and paper operator to a brainless human body, and you got something indistinguishable from a regular alive human.

[0] https://en.wikipedia.org/wiki/Functionalism_%28philosophy_of...

umanwizard 3 days ago | parent | prev [-]

You can simulate a human brain on pen and paper too.

palmotea 3 days ago | parent | next [-]

> You can simulate a human brain on pen and paper too.

That's an assumption, though. A plausible assumption, but still an assumption.

We know you can execute an LLM on pen and paper, because people built them and they're understood well enough that we could list the calculations you'd need to do. We don't know enough about the human brain to create a similar list, so I don't think you can reasonably make a stronger statement than "you could probably simulate..." without getting ahead of yourself.
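For what it's worth, here's what "listing the calculations" looks like at toy scale: every step reduces to multiplications, additions, and exponentials, each doable by hand in principle. All the numbers and the three-word vocabulary below are made up; this is a sketch of the shape of the computation, not any real model.

```python
import math

# Made-up "language model": 3-word vocabulary, 2-dimensional embeddings.
embed = {"the": [1.0, 0.0], "cat": [0.0, 1.0], "sat": [1.0, 1.0]}
W_out = [[0.5, -0.2], [0.1, 0.9], [-0.3, 0.4]]  # one row per vocab word

def next_word_probs(word):
    x = embed[word]
    # Logits: one dot product per vocabulary word -- pen-and-paper arithmetic.
    logits = [sum(w * xi for w, xi in zip(row, x)) for row in W_out]
    # Softmax: a few exponentials and a division, still doable by hand.
    exps = [math.exp(l) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = next_word_probs("cat")
print(probs)       # a probability distribution over {"the", "cat", "sat"}
print(sum(probs))  # sums to 1.0 (up to rounding)
```

A real transformer adds attention, layer norms, and billions of weights, but nothing that isn't ultimately this kind of arithmetic.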

terminalshort 3 days ago | parent | next [-]

I can make a claim much stronger than "you could probably." The counterclaim here is that the brain may not obey physical laws that can be described by mathematics. This is a "5G causes covid" level claim. The overwhelming burden of proof is on you.

frozenlettuce 3 days ago | parent | next [-]

There are some quantum effects in the brain (for some people, that's a possible source of consciousness). We can simulate quantum effects, but here comes the tricky part: even if our simulation matches the probabilities, say a 70/30 chance of something happening, what guarantees that our simulation would take the same path as the object being simulated?

daedrdev 3 days ago | parent | next [-]

We don't have to match the quantum state, since the brain still produces a valid output regardless of what each random quantum probability ended up as. And we can include random entropy in an LLM too.
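To sketch what "include random entropy" means here: a PRNG can reproduce the statistics of a 70/30 quantum outcome without retracing any particular path (the 70/30 figure is just the example from upthread, and all names below are illustrative).

```python
import random

def collapse(rng):
    # One simulated "measurement": outcome A with probability 0.7, else B.
    return "A" if rng.random() < 0.7 else "B"

# Two runs with different seeds: the individual paths typically differ...
rng1, rng2 = random.Random(1), random.Random(2)
path1 = [collapse(rng1) for _ in range(10)]
path2 = [collapse(rng2) for _ in range(10)]
print(path1)
print(path2)

# ...but the long-run statistics match the simulated 70/30 distribution.
rng = random.Random(0)
n = 100_000
freq = sum(collapse(rng) == "A" for _ in range(n)) / n
print(round(freq, 3))  # close to 0.7
```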

terminalshort 3 days ago | parent | prev [-]

This is just non-determinism. Not only can't your simulation reproduce the exact output, but neither can your brain reproduce its own previous state. This doesn't mean it's a fundamentally different system.

kipchak 3 days ago | parent | prev [-]

Consider for example Orch OR theory. If it or something like it were to be accurate, the brain would not "obey physical laws that can be described by mathematics".

bondarchuk 3 days ago | parent | next [-]

>Consider for example Orch OR theory

Yes, or what about leprechauns?

kipchak 3 days ago | parent [-]

Orch OR is probably wrong, but the broader point is that we still don’t know which physical processes are necessary for cognition. Until we do, claims of definitive brain simulability are premature.

DoctorOetker 3 days ago | parent | prev [-]

the transition probability matrices don't obey the laws of statistics?

hnfong 3 days ago | parent | prev [-]

This is basically the Church-Turing thesis, and one of the motivations for using tape (paper) and an arbitrary alphabet in the Turing machine model.

It was discussed to oblivion in the last century; it's interesting that people seem unaware of the existing literature and repeat the same arguments (not saying anyone is wrong).
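For concreteness, the tape-and-alphabet model is tiny to write down. A minimal sketch in Python (the machine itself, a unary incrementer, is an arbitrary toy and every name here is illustrative):

```python
def run_turing_machine(tape, transitions, state="start", pos=0):
    # Tape is a dict from position to symbol; blank cells read as "_".
    tape = dict(enumerate(tape))
    while state != "halt":
        symbol = tape.get(pos, "_")
        state, write, move = transitions[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Example machine: scan right over a block of 1s and append one more 1
# (unary increment).
transitions = {
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("halt", "1", "R"),
}
print(run_turing_machine("111", transitions))  # "1111"
```

The tape could just as well be a paper notebook and the transition table a human following rules, which is exactly the point of the model.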

phantasmish 3 days ago | parent | prev | next [-]

The simulation isn't an operating brain. It's a description of one. What it "means" is imposed by us; what it actually is is a shitload of graphite marks on paper, or relays flipping around, or rocks on sand, or (pick your medium).

An arbitrarily-perfect simulation of a burning candle will never, ever melt wax.

An LLM is always a description. An LLM operating on a computer is identical to a description of it operating on paper (if much faster).

gnull 3 days ago | parent | next [-]

What makes the simulation we live in special compared to the simulation of a burning candle that you or I might be running?

That simulated candle is perfectly melting wax in its own simulation. Duh, it won't melt any in ours, because our arbitrary notions of "real" wax are disconnected between the two simulations.

hnfong 3 days ago | parent [-]

They do have a valid subtle point though.

If we don't think the candle in a simulated universe is a "real candle", why do we consider the intelligence in a simulated universe possibly "real intelligence"?

Being a functionalist ( https://en.wikipedia.org/wiki/Functionalism_(philosophy_of_m... ) myself, I don't know the answer off the top of my head.

hackinthebochs 3 days ago | parent | next [-]

>If we don't think the candle in a simulated universe is a "real candle", why do we consider the intelligence in a simulated universe possibly "real intelligence"?

I can smell a "real" candle, a "real" candle can burn my hand. The term real here is just picking out a conceptual schema where its objects can feature as relata of the same laws, like a causal compatibility class defined by a shared causal scope. But this isn't unique to the question of real vs simulated. There are causal scopes all over the place. Subatomic particles are a scope. I, as a particular collection of atoms, am not causally compatible with individual electrons and neutrons. Different conceptual levels have their own causal scopes and their own laws (derivative of more fundamental laws) that determine how these aggregates behave. Real (as distinct from simulated) just identifies causal scopes that are derivative of our privileged scope.

Consciousness is not like the candle because everyone's consciousness is its own unique causal scope. There are psychological laws that determine how we process and respond to information. But each of our minds are causally isolated from one another. We can only know of each other's consciousness by judging behavior. There's nothing privileged about a biological substrate when it comes to determining "real" consciousness.

hnfong 3 days ago | parent [-]

Right, but doesn't your argument imply that the only "real" consciousness is mine?

I'm not against this conclusion ( https://en.wikipedia.org/wiki/Philosophical_zombie ) but it doesn't seem to be compatible with what most people believe in general.

hackinthebochs 3 days ago | parent | next [-]

That's a fair reading but not what I was going for. I'm trying to argue for the irrelevance of causal scope when it comes to determining realness for consciousness. We are right to privilege non-virtual existence when it comes to things whose essential nature is to interact with our physical selves. But since no other consciousness directly physically interacts with ours, it being "real" (as in physically grounded in a compatible causal scope) is not an essential part of its existence.

Determining what is real by judging causal scope is generally successful but it misleads in the case of consciousness.

hnfong 2 days ago | parent [-]

I don't think causal scope is what makes a virtual candle virtual.

If I make a button that lights the candle, and another button that puts it out, and I press those buttons, then the virtual candle is causally connected to our physical world.

But obviously the candle is still considered virtual.

Maybe a candle is not as illustrative, but let's say we're talking about a very realistic and immersive MMORPG. We directly do stuff in the game, and with the right VR hardware it might even feel real, but we call it a virtual reality anyway. Why? And if there's an AI NPC, we say that the NPC's body is virtual -- but when we talk about the AI's intelligence (which at this point is the only AI we know about -- simulated intelligence in computers) why do we not automatically think of this intelligence as virtual in the same way as a virtual candle or a virtual NPC's body?

hackinthebochs 2 days ago | parent [-]

Yes, causal scope isn't what makes it virtual. It's what makes us say it's not real. The real/virtual dichotomy is what I'm attacking. We treat virtual as the opposite of real, therefore a virtual consciousness is not real consciousness. But this inference is specious. We mistake the causal scope issue for the issue of realness. We say the virtual candle isn't real because it can't burn our hand. What I'm saying is that, actually the virtual candle can't burn our hand because of the disjoint causal scope. But the causal scope doesn't determine what is real, it just determines the space and limitations of potential causal interactions.

Real is about an object having all of the essential properties for that concept. If we take it as essential that candles can burn our hand, then the virtual candle isn't real. But it is not essential to consciousness that it is not virtual.

grantcas a day ago | parent | prev [-]

[dead]

BobbyJo 3 days ago | parent | prev | next [-]

> If we don't think the candle in a simulated universe is a "real candle", why do we consider the intelligence in a simulated universe possibly "real intelligence"?

A candle in Canada can't melt wax in Mexico, and a real candle can't melt simulated wax. If you want to differentiate two things along one axis, you can't just point out differences that may or may not have any effect on that axis. You have to establish a causal link before the differences have any meaning. To my knowledge, intelligence/consciousness/experience doesn't have a causal link with anything.

We know our brains cause consciousness the way we knew in 1500 that being on a boat for too long causes scurvy. Maybe the boat and the ocean matter, or maybe they don't.

phantasmish 3 days ago | parent | prev [-]

I think the core trouble is that it's rather difficult to simulate anything at all without requiring a human in the loop before it "works". The simulation isn't anything (well, it's something, but it's definitely not what it's simulating) until we impose that meaning on it. (We could, of course, levy a similar accusation at reality, but folks tend to avoid that because it gets uselessly solipsistic in a hurry)

A simulation of a tree growing (say) is a lot more like the idea of love than it is... a real tree growing. Making the simulation more accurate changes that not a bit.

penteract 3 days ago | parent | prev | next [-]

I believe that the important part of a brain is the computation it's carrying out. I would call this computation thinking and say it's responsible for consciousness. I think we agree that this computation would be identical if it were simulated on a computer or paper. If you pushed me on what exactly it means for a computation to physically happen and create consciousness, I would have to move to statements I'd call dubious conjectures rather than beliefs - your points in other threads about relying on interpretation have made me think more carefully about this.

Thanks for stating your views clearly. I have some questions to try and understand them better:

Would you say you're sure that you aren't in a simulation while acknowledging that a simulated version of you would say the same?

What do you think happens to someone whose neurons get replaced by small computers one by one (if you're happy to assume for the sake of argument that such a thing is possible without changing the person's behavior)?

cibyr 3 days ago | parent | prev | next [-]

It seems to me that the distinction becomes irrelevant as soon as you connect inputs and outputs to the real world. You wouldn't say that a 737 autopilot can never, ever fly a real jet and yet it behaves exactly the same whether it's up in the sky or hooked up to recorded/simulated signals on a test bench.

amelius 3 days ago | parent | prev | next [-]

Here is a thought experiment:

Build a simulation of creatures that evolve from simple structures (think RNA, DNA).

Now, if in this simulation, after many many iterations, the creatures start talking about consciousness, what does that tell us?

amelius 3 days ago | parent | prev [-]

> An arbitrarily-perfect simulation of a burning candle will never, ever melt wax.

It might if the simulation includes humans observing the candle.

andrepd 3 days ago | parent | prev | next [-]

It's an open problem whether you can or not.

space_fountain 3 days ago | parent [-]

It’s not that open. We can simulate small systems of neurons just fine, and we can simulate chemistry. There might be something beyond that in our brains for some reason, but it seems doubtful right now

phantasmish 3 days ago | parent | next [-]

Our brains actually do something; that may be the difference. They're a thing happening, not a description of a thing happening.

Whatever that something is that the brain actually does in the real, physical world, it produces the cogito in cogito, ergo sum, and I doubt you can get it just by describing what all the subatomic particles are doing, any more than a computer- or pen-and-paper-simulated hurricane can knock your house down, no matter how perfectly simulated.

thrance 3 days ago | parent | next [-]

You're arguing for the existence of a soul, for dualism. Nothing wrong with that, except we have never been able to measure it, and have never had to use it to explain any phenomenon of the brain's working. The brain follows the rules of physics, like any other objects of the material world.

A pen and paper simulation of a brain would also be "a thing happening" as you put it. You have to explain what is the magical ingredient that makes the brain's computations impossible to replicate.

You could connect your brain simulation to an actual body, and you'd be unable to tell the difference with a regular human, unless you crack it open.

phantasmish 3 days ago | parent [-]

> You're arguing for the existence of a soul, for dualism.

I'm not. You might want me to be, but I'm very, very much not.

ehsanu1 3 days ago | parent | prev | next [-]

Doing something merely requires I/O. Brains wouldn't be doing much without that. A sufficiently accurate simulation of a fundamentally computational process is really just the same process.

terminalshort 3 days ago | parent | prev [-]

Why are the electric currents moving in a GPU any less of a "thing happening" than the firing of the neurons in your brain? What you are describing here is a claim that the brain is fundamentally supernatural.

phantasmish 3 days ago | parent [-]

Thinking that making scribbles that we interpret(!!!) as perfectly describing a functioning consciousness and its operation, on a huge stack of paper, would manifest consciousness in any way whatsoever (hell, let's say we make it an automated flip-book, too, so it "does something"), but if you made the scribbles slightly different it wouldn't work (why, exactly, not?), is what's fundamentally supernatural. It's straight-up Bronze Age religion kinds of stuff (which fits—the tech elite is full of that kind of shit, like mummification—er, I mean—"cryogenic preservation", millenarian cults—er, I mean—The Singularity, etc.)

Of course a GPU involves things happening. No amount of using it to describe a brain operating gets you an operating brain, though. It's not doing what a brain does. It's describing it.

(I think this is actually all somewhat tangential to whether LLMs "can think" or whatever, though—but the "well of course they might think because if we could perfectly describe an operating brain, that would also be thinking" line of argument often comes up, and I think it's about as wrong-headed as a thing can possibly be, a kind of deep "confusing the map for the territory" error; see also comments floating around this thread offhandedly claiming that the brain "is just physics"—like, what? That's the cart before the horse! No! Dead wrong!)

hackinthebochs 3 days ago | parent [-]

Computation doesn't care about its substrate. A simulation of a computation is just a computation.

3 days ago | parent | prev [-]
[deleted]
pton_xd 3 days ago | parent | prev | next [-]

So the brain is a mathematical artifact that operates independently from time? It just happens to be implemented using physics? Somehow I doubt it.

thrance 3 days ago | parent [-]

The brain follows the laws of physics. The laws of physics can be closely approximated by mathematical models. Thus, the brain can be closely approximated by mathematical models.

an0malous 3 days ago | parent | prev [-]

Parent said replicate, as in deterministically.