pksebben · 2 days ago
I feel like, despite the close analysis you grant to the meanings of "formalization" and "syntactic", you've glossed over some more fundamental definitions that are pivotal to the argument at hand.

> LLMs do not reason. They do not infer. They do not analyze.

(definitions from Oxford Languages)

reason (v): think, understand, and form judgments by a process of logic.

To avoid being circular, I'm willing to write this one off because of "think" and "understand", as those are the root of the question here. However, forming a judgment by a process of logic is precisely what these LLMs do, and we can see that clearly in chain-of-thought LLM processes (see the sketch at the end of this comment for what such a process looks like in the textbook sense).

infer (v): deduce or conclude (information) from evidence and reasoning rather than from explicit statements.

Again, we run the risk of circular logic because of the use of "reason". An LLM is for sure using evidence to get to conclusions, however.

analyze (v): examine methodically and in detail the constitution or structure of (something, especially information), typically for purposes of explanation and interpretation.

This one I'm willing to go to bat for completely. I have seen LLMs do this, precisely according to the definition above.

For those looking for the link to the above definitions: they're the snippets Google provides when searching for "SOMETHING definition". They're a non-paywalled version of the Oxford definitions.

Philosophically, I would argue that it's impossible to know what these processes look like in the human mind, so creating an equivalency (positive or negative) is an exercise in futility. We do not know what a human memory looks like, we do not know what a human thought looks like; we only know what the output of these things looks like. So the only real metric we have for an apples-to-apples comparison is the appearance of thought, not the substance of the thing itself.

That said, there are perceptible differences between the output of a human thought and what is produced by an LLM. These differences are shrinking, and there will come a point where we can no longer distinguish machine thinking from human thinking (perhaps it won't be an LLM doing it, but some model of some kind will). I would argue that at that point the difference is academic at best.

Say we figure out how to have these models teach themselves and glean new information from their interactions. Say we also grant them directives to protect themselves and multiply. At what point do we say that the distinction between the image of man and man itself is moot?
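For concreteness, a textbook "process of logic" (the thing the definition of reason(v) appeals to) looks something like forward chaining over explicit facts and rules. A minimal sketch, with a toy transitivity rule standing in for a real rule base; this is not a claim about how an LLM works internally, just an illustration of what the definition describes:

    # Forming a judgment by a process of logic, in the textbook sense:
    # explicit facts, an explicit rule, and conclusions derived by
    # applying the rule until nothing new follows.
    facts = {("whale", "is", "mammal"), ("mammal", "is", "warm-blooded")}

    def forward_chain(facts):
        # Toy rule: X is Y and Y is Z  =>  X is Z  (transitive "is")
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for (x, _, y) in list(derived):
                for (a, _, z) in list(derived):
                    if a == y and (x, "is", z) not in derived:
                        derived.add((x, "is", z))
                        changed = True
        return derived

    print(("whale", "is", "warm-blooded") in forward_chain(facts))  # True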
lo_zamoyski · 20 hours ago
> forming a judgment by a process of logic is precisely what these LLMs do, and we can see that clearly in chain-of-thought LLM processes

I don't know how you arrived at that conclusion. This is no mystery: LLMs work by making statistical predictions (a sketch of what that loop literally looks like is at the end of this comment), and even the word "prediction" is loaded here. That is not inference. We cannot "clearly see" that it is doing inference, because inference is not observable. What we observe is the product of a process, one that resembles the products of human reasoning. Your claim is effectively behaviorist.

> An LLM is for sure using evidence to get to conclusions, however.

Again, the certainty. No, it isn't "for sure". It is neither using evidence nor reasoning, for the reasons I gave. Both presuppose intentionality, which is excluded by Turing machines and equivalent models.

> [w.r.t. "analyze"] I have seen LLMs do this, precisely according to the definition above.

Again, you have not seen an LLM do this. You have seen an LLM produce output that might resemble this. Analysis likewise presupposes intentionality, because it involves breaking down concepts, and concepts are the very locus of intentionality. Without concepts, you don't get analysis. I cannot overstate the centrality of concepts to intelligence. They're more important than inference, and indeed presupposed by inference.

> Philosophically I would argue that it's impossible to know what these processes look like in the human mind, and so creating an equivalency (positive or negative) is an exercise in futility.

That's not a philosophical claim. It's a neuroscientific one that insists the answer must be phrased in neuroscientific terms. Philosophically, we don't even need to know the mechanisms or processes or causes of human intelligence to know that the heart of human intelligence is intentionality. It's implicit in the definition of what intelligence is! If you deny intentionality, you subject yourself to a dizzying array of incoherence, beginning with the self-refuting consequence that you could not make this argument against intentionality in the first place without intentionality.

> At what point do we say that the distinction between the image of man and man itself is moot?

Whether something is moot depends on the aim. What is your aim? If your aim is theoretical, which is to say truth for its own sake, to know whether something is A or B and whether A is B, then it is never moot. If your aim is practical and scoped, if you want some instrument whose utility is indistinguishable from or superior to that of a human being in the desired effects it produces, then sure, maybe the question is moot in that case. I don't care whether my computer was fabricated by a machine or a human being; I care about the quality of the computer.

But in the latter case, you're not really asking whether there is a distinction between man and the image of man (which, btw, already draws the very distinction you seem to want to forget or deny, since the image of a thing is never the same as the thing). So I don't really understand the question. The use of the word "moot" seems like a category mistake here. Besides, the ability to distinguish two things is an epistemic question, not an ontological one.
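To be concrete about the "statistical prediction" point above: the decoding loop is nothing more than score every token in the vocabulary, turn the scores into a probability distribution, emit the most likely token, repeat. A minimal sketch with a toy vocabulary and hand-written logits (a real model computes the logits with a learned network conditioned on the whole context, but the loop has the same shape):

    import math

    vocab = ["the", "cat", "sat", "on", "mat", "."]

    def toy_logits(context):
        # Stand-in for the network: a lookup on the previous token
        # instead of learned weights over the whole context.
        table = {
            None:  [2.0, 0.1, 0.1, 0.1, 0.1, 0.1],  # start -> "the"
            "the": [0.1, 2.0, 0.2, 0.1, 1.5, 0.1],  # "the" -> "cat"
            "cat": [0.1, 0.1, 2.0, 0.2, 0.1, 0.3],  # "cat" -> "sat"
            "sat": [0.2, 0.1, 0.1, 2.0, 0.1, 0.1],  # "sat" -> "on"
            "on":  [2.0, 0.3, 0.1, 0.1, 0.5, 0.1],  # "on"  -> "the"
            "mat": [0.1, 0.1, 0.1, 0.1, 0.1, 2.0],  # "mat" -> "."
        }
        return table[context[-1] if context else None]

    def softmax(logits):
        exps = [math.exp(x) for x in logits]
        total = sum(exps)
        return [e / total for e in exps]

    context = []
    for _ in range(6):
        probs = softmax(toy_logits(context))
        context.append(vocab[max(range(len(vocab)), key=probs.__getitem__)])

    print(" ".join(context))  # "the cat sat on the cat" -- argmax over a distribution

At no point does this loop consult evidence or apply a rule; it only ranks tokens by probability. That is the sense in which calling it "inference" is loaded.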
| ||||||||