metalcrow 8 hours ago

I've attempted desperately to understand this paper after thoroughly reading it and have made 0 progress. Can anyone who does understand it attempt to explain?

Currently my understanding is that this paper is claiming that "concepts" are a fundamental building block of experience (which relates to consciousness), and can only be built by a mapmaker which is something that directly converts continuous physical phenomena into discrete tokens. But I couldn't get further into how that related to consciousness.

EDIT: the paper seems to assume that something simulating a mapmaker, or the process of mapmaking, cannot by its nature be a mapmaker, since performing alphabetization is inherently something that must be "instantiated". But how do they confirm whether something is merely simulating versus actually instantiating? How can you tell the difference? They say that, much like simulating photosynthesis will not produce glucose, simulating mapmaking will not produce concepts. But you can't measure concepts; they're intangible, so you can't differentiate simulated mapmaking from a real mapmaker.

GMoromisato 8 hours ago | parent | next [-]

It starts by saying that a simulation of something is not the real thing. A simulation of a hurricane is not a hurricane. That's certainly true and even obvious.

Then they say that current AI is just a simulation of consciousness and therefore is not real consciousness. Moreover, it can never be real consciousness because it is just a simulation.

But that's a circular argument: they are defining AI as a simulation. But what if AI is not a simulation of consciousness but actual consciousness? They don't offer any argument for why that's impossible.

ribosometronome 8 hours ago | parent | next [-]

>A simulation of a hurricane is not a hurricane

If we simulated a hurricane by somehow inducing a rotating, organized system of clouds and thunderstorms over warm tropical waters with wind speeds over 75 mph, the difference could end up being fairly unimportant to those in the simulation's path.

Computer simulations of hurricanes obviously lack those important properties of what makes something a hurricane. I'm not so sure that the same would apply to something as abstract and difficult to define as consciousness.

GMoromisato 6 hours ago | parent [-]

Agreed! The paper is not explicit about how to distinguish between a simulation and the real thing, and that's how it gets into trouble.

With consciousness, the extra difficulty is that we can't distinguish via observable evidence. With a hurricane, we can measure wind-speed and track insurance claims to distinguish between simulation and the real thing. How do we do that with consciousness? What is the observable effect of consciousness?

mannykannot 7 hours ago | parent | prev | next [-]

On the other hand, an accurate digital simulation of a mechanical calculator really does calculate. The "a simulation is not the real thing" objection breaks down when the function is information processing, on account of information's substrate independence.
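A toy illustration of that point (my own sketch, not from the paper or the thread): simulate a mechanical digit-wheel adding machine in software, and the simulated carry levers still produce real sums. The simulation of calculating is calculating.

```python
def wheel_add(a_digits, b_digits):
    """Add two numbers represented as lists of decimal digit wheels
    (least significant wheel first), propagating carries the way a
    mechanical adder's carry levers would."""
    wheels = []
    carry = 0
    for a, b in zip(a_digits, b_digits):
        total = a + b + carry
        wheels.append(total % 10)   # wheel position after turning
        carry = total // 10         # carry lever trips the next wheel
    if carry:
        wheels.append(carry)
    return wheels

# 47 + 85 = 132, wheels stored least-significant-digit first
print(wheel_add([7, 4], [5, 8]))  # [2, 3, 1]
```

The answer the simulation produces is not a representation of 132; it is 132, because the function being performed (addition) is itself information processing.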

metalcrow 8 hours ago | parent | prev | next [-]

Yep, that's about what I managed to get out of it as well. If you define AI as a simulation of a mapmaker, it can't be a real mapmaker. But they are never able to prove that it IS only a simulation, instead of an actual mapmaker.

dgellow 6 hours ago | parent | prev | next [-]

It’s a simulation of language, not consciousness. Though the problem you mention is pretty much the same

RC_ITR 6 hours ago | parent | prev | next [-]

We invented a word for a very specific thing (consciousness) and are now debating whether that relatively unimportant word represents a large open set or a narrow closed set.

We do one thing in our bodies with a relatively binary nervous system and a fundamentally continuous endocrine system. That's clearly and unanimously consciousness. We also, however, see other animals with similar setups but fewer capabilities, so we understand it exists on a spectrum.

We separately invented a thing that gets to similar outcomes with fundamentally binary logic gates.

Our minds are drawn to comparison and classification, so we fight over how similar or different those two things are, in a way that often feels unsatisfactory: to meaningfully compare the two, we have to reduce them in a way that feels like it's underselling either or both.

CamperBob2 8 hours ago | parent | prev | next [-]

Also, since there's no way to prove that we're not entities in a simulation of something else, the argument runs out of steam in the opposite direction as well.


jstanley 8 hours ago | parent | prev | next [-]

They're defining consciousness ("mapmaker") to exist outside the AI, and then showing that AI can't meet their definition of consciousness.

jsdalton 8 hours ago | parent [-]

Yes, and it immediately called to mind for me the phrase “the map is not the territory.”

Put another way: no matter how detailed or "perfect" you make a map, it will never be the territory, i.e. the thing that is mapped.

Computers and AI are like a map in this regard: just ones and zeros that we have assigned meaning to arbitrarily. No matter how "good" AI gets, it's still just a map of the thing, not the thing itself.

So AI saying “I feel sad” is never more than a representation of sadness that should not be confused with the subjective experience of sadness itself.

bee_rider 7 hours ago | parent [-]

If you make a big enough map you can fly it over and drop it on the territory I guess. Then does it become the territory?

josefritzishere 4 hours ago | parent [-]

According to the paper, no.

ReadEvalPost 7 hours ago | parent | prev | next [-]

I've tried to explain this paper to people in similar circumstances and have also struggled!

In my mind the key point of departure between this paper and the more standard computational functionalist approaches is the importance of metabolism. Metabolism _precedes_ organism. The body is first deeply entangled with the environment through exchanges of resources (content causality) before it is capable of building computers (vehicle causality). Having built computers and alphabetized the world, we can understand them in terms of discrete state transitions.

I expect my explanations have been unsatisfying, since it is easy to immediately re-cast metabolism as some alphabetized input/output system and place it back into the computational framework. Moving outside of this framework requires engaging with the enactivist/organicist traditions, which is a rich but minority view.

harpiaharpyja 8 hours ago | parent | prev | next [-]

I'm only partway through, but I believe one of the foundational blocks is that computation is fundamentally an interpretation of physical events, not something that can just exist by itself.

renticulous 7 hours ago | parent | prev | next [-]

Currently our understanding of living systems is that consciousness has to inhabit the body. What if tomorrow we found an alien race that is like a drone operator operating a drone, somewhat like the Na'vi wirelessly controlling other animals? Would we change our definition of consciousness if the brain (command and control centre) and body (physical execution) were distinct systems? This argument was stated by Daniel Dennett.

soco 5 hours ago | parent | prev [-]

"ceci n'est pas une pipe" - a century old argument which still holds.