geye1234 2 days ago

I agree that when I see the word "tiger" on paper or a screen, I am looking at something that refers to a tiger. And the same is true, mutatis mutandis, for spectrograms, oscilloscopes, photos on screens, and so on. But the word "tiger" only refers to a tiger because a mind makes the connection between the word and the thing. Without that connection, it really is just a bunch of ink on a page. There is nothing inherent in the shape of the ink's squiggles that makes it refer to a tiger.

It is not a question of "different levels and frameworks" of description in this case, or at least not solely. The examples of clouds and lightning given in the paper are not valid parallels (I know the paper doesn't offer them as parallels, but your comment did), because they do not need relations with other things in order to be water droplets and discharges of electricity. But a word needs a relation with something else (its referent) in order to be a word; otherwise it is just meaningless squiggles, sounds, or pixels. And the word does not have this relation in and of itself: only a mind can give it this relation.

(You can take relation and reference as largely synonymous for the purposes of this comment.)

You can analyze the squiggles as closely as you like, but you will still not find any relation to a tiger, unless you have something else (a mind) giving it that relation. And again, the same is true for the other examples you give. Extrinsic relations exist between word and thing, or oscilloscope and wave, but not intrinsic ones.

In the same way, the brain's state when it thinks of a tiger is, in and of itself, a bunch of chemical and electrical states that bear no intrinsic relation to a tiger. No amount of analysis of the brain's state will change this. As I stated elsewhere, a tiger and a brain state, like a tiger and the word "tiger", are two entirely different things, and are not intrinsically related to each other. You can analyze either the tiger or the brain state with whatever sophisticated technology you want, but that will not change this fact. Analyzing a bunch of squiggles will produce information about the ink, but not information about a tiger: you are still looking at ink. Analyzing chemical and electrical states will produce interesting and very valuable information about the brain, but not information about a tiger: you are still looking at chemical and electrical states. No amount of searching will find an intrinsic relation between brain state and thing. (I think this is also a good argument against Cartesian dualism, but that's beside the point right now.)

The relation between a thought and its object must be intrinsic (assuming our thoughts are about, or can be about, reality). It cannot be extrinsic, like a word's or an oscilloscope's, because our thoughts are not given their meaning by something outside our minds. (I assume we agree on this last point after "because", and it doesn't need arguing.) Our thoughts' relations to their objects must be intrinsic to the thoughts themselves. But they can't be intrinsic if our thoughts are our brain states, for the reason just given.

(The UMD paper responds to this objection in section 3.8: briefly, my response is that we might get the illusion of intentionality in a physical system like a computer, but no more than that.)