jibal 4 days ago

If having a thought is strictly a consequence of physical processes, physical occurrences, then physicalism is true, so of course it is not "absolutely impossible", even if it were to turn out that it's not true. By token-identity -- which is one but not the only possible model -- the brain being in some specific physical state is identical with having some specific thought--that's all ... the brain being in that specific state is coincident with having that specific thought. Word games about "reference" don't change that. The language we use to talk about brain states is very different from the language we use to talk about thoughts because they are very different conceptual frameworks for describing what happens to be the same occurrence. We describe thoughts as being about things, not in terms of activation levels, synapses firing, etc., and we talk about brain states in terms of the latter, not in terms of being about tigers etc., but that doesn't mean that these totally different sorts of descriptions aren't about the same physical occurrence. When you have a specific thought about a tiger, your brain is in a specific configuration, and if it weren't then you wouldn't be having that specific thought. That's what token identity means ... each mental state corresponds directly to a physical state of the brain.

> Physicalism says the word "thought" and the phrase "a particular stimulation of neural fibers" refer to the same thing (from document above)

Here is what it actually says:

> The identity-thesis is a version of physicalism: it holds that all mental states and events are in fact physical states and events. But it is not, of course, a thesis about meaning: it does not claim that words such as ‘pain’ and ‘after-image’ may be analyzed or defined in terms of descriptions of brain-processes. (That would be absurd.) Rather, it is an empirical thesis about the things in the world to which our words refer: it holds that the ways of thinking represented by our terms for conscious states, and the ways of thinking represented by some of our terms for brain-states, are in fact different ways of thinking of the very same (physical) states and events. So ‘pain’ doesn’t mean ‘such-and-such a stimulation of the neural fibers’ (just as ‘lightning’ doesn’t mean ‘such-and-such a discharge of electricity’); yet, for all that, the two terms in fact refer to the very same thing.

And yet the sort of analysis it points out as absurd is exactly the sort of analysis you are attempting.

> You cannot analyze a neural state with a brain scan and find a reference to a tiger. You will see a bunch of chemical and electrical states, nothing more. You will not see the object of the thought.

Says who? Of course we don't currently have such technology, but at some time in the future we may be able to analyze a brain scan and determine that the subject is thinking of a tiger. (This may well turn out not to be feasible if only token-identity holds but not type-identity ... thoughts about similar things need not correspond to similar brain states.)

Saying that we only see a bunch of chemical and electrical states is the most absurd naive reductivist denial of inference possible. When we look at a spectrogram, all we see is colored lines, yet we are able to infer what substances produced them. When we look at an oscilloscope, all we see is a bunch of curves, yet we infer the signal that produced them. And so on. Or take the examples at the beginning of the paper ... "a particular cloud is, as a matter of fact, a great many water droplets suspended close together in the atmosphere; and just as a flash of lightning is, as a matter of fact, a certain sort of discharge of electrical energy" -- these are different levels and frameworks of description. Look at a photograph or a computer screen up close and you will see pixels or chemical arrangements. To say that you will see "nothing more" is to deny the entirety of science and rational thought. One could just as well say that the windows, titles, bar charts, and this comment on a computer screen refer to things but that the pixel states of the screen coincident with them don't, and thereby foolishly, absurdly, think that one has defeated physicalism.

Enough with the terrible arguments and shoddy thinking. You're welcome to them ... I reject them.

Over and out.

geye1234 2 days ago

I agree that when I see the word "tiger" on paper or a screen, I am looking at something that refers to a tiger. And the same is true, mutatis mutandis, for spectrograms, oscilloscopes, photos on screens, etc. etc. But the word "tiger" only refers to a tiger because a mind makes the connection between the word and the thing. Without that connection, it really is just a bunch of ink on a page. There is nothing inherent in the shape of the ink's squiggles that makes it refer to a tiger.

It is not a question of "different levels and frameworks" of description in this case, or at least not solely. The examples of clouds and lightning given in the paper are not valid parallels (I know the paper doesn't offer them as parallels, but your comment did), because they do not need relations with other things to be water droplets and discharges of electricity. But a word needs a relation with something else (its referent) to be a word, otherwise it is just meaningless squiggles, sounds or pixels. And the word does not have this relation in and of itself: only a mind can give it this relation.

(You can take relation and reference as largely synonymous for the purposes of this comment.)

You can analyze the squiggles as closely as you like, but you will still not find any relation to a tiger, unless you have something else (a mind) giving it that relation. And again, the same is true for the other examples you give. Extrinsic relations exist between word and thing or oscilloscope and wave, but not intrinsic ones.

In the same way, the brain's state when it thinks of a tiger is, in and of itself, a bunch of chemical and electric states that bear no intrinsic relation to a tiger. No amount of analysis of the brain's state will change this. As I stated somewhere else, a tiger and a brain state, like a tiger and the word "tiger", are two entirely different things, and are not intrinsically related to each other. You can analyze either the tiger or the brain state with whatever sophisticated technology you want, but that will not change this fact. Analyzing a bunch of squiggles will produce information about the ink, but not information about a tiger: you are still looking at ink. Analyzing chemical and electric states will produce interesting and very valuable information about the brain, but not information about a tiger: you are still looking at chemical and electric states. No amount of searching will find an intrinsic relation between brain-state and thing. [I think this is also a good argument against Cartesian dualism, but that's beside the point right now.]

The relation between thought and its object must be intrinsic (assuming our thoughts are about, or can be about, reality). It cannot be extrinsic like a word or an oscilloscope, because our thoughts are not given their meaning by something outside our minds. (I assume we agree on this last point after "because", and it doesn't need arguing.) Our thoughts' relations to their objects must be intrinsic to the thoughts. But they can't be intrinsic if our thoughts are our brain states, for the reason just given.

(The UMD paper responds to this objection in section 3.8: briefly, my response is that we might get the illusion of intentionality in a physical system like a computer, but no more than that.)