lo_zamoyski 20 hours ago
> forming a judgement by a process of logic is precisely what these LLMs do, and we can see that clearly in chain-of-logic LLM processes

I don't know how you arrived at that conclusion. There is no mystery here. LLMs work by making statistical predictions, and even the word "prediction" is loaded. This is not inference. We cannot "clearly see" that it is doing inference, because inference is not observable. What we observe is the product of a process, and that product resembles the products of human reasoning. Your claim is effectively behaviorist.

> An LLM is for sure using evidence to get to conclusions, however.

Again, the certainty. No, it isn't "for sure". It is neither using evidence nor reasoning, for the reasons I gave. Both presuppose intentionality, which is excluded by Turing machines and equivalent models.

> [w.r.t. "analyze"] I have seen LLM do this, precisely according to the definition above.

Again, you have not seen an LLM do this. You have seen an LLM produce output that might resemble this. Analysis likewise presupposes intentionality, because it involves breaking down concepts, and concepts are the very locus of intentionality. Without concepts, you don't get analysis. I cannot overstate the centrality of concepts to intelligence. They're more important than inference, and indeed presupposed by inference.

> Philosophically I would argue that it's impossible to know what these processes look like in the human mind, and so creating an equivalency (positive or negative) is an exercise in futility.

That's not a philosophical claim. It's a neuroscientific one that insists the answer must be phrased in neuroscientific terms. Philosophically, we don't need to know the mechanisms or processes or causes of human intelligence to know that the heart of human intelligence is intentionality. It's implicit in the definition of what intelligence is! If you deny intentionality, you subject yourself to a dizzying array of incoherence, beginning with the self-refuting consequence that you could not even make this argument against intentionality without intentionality.

> At what point do we say that the distinction between the image of man and man itself is moot?

Whether something is moot depends on the aim. What is your aim? If your aim is theoretical, which is to say the truth for its own sake - to know whether something is A, whether something is B, and whether A is B - then it is never moot. If your aim is practical and scoped - you want some instrument whose utility is indistinguishable from or superior to a human being's in the desired effects it produces - then sure, maybe the question is moot in that case. I don't care whether my computer was fabricated by a machine or by a human being; I care about the quality of the computer. But then, in the latter case, you're no longer asking whether there is a distinction between man and the image of man (which, btw, already draws the very distinction you seem to want to forget or deny, since the image of a thing is never the thing itself). So I don't really understand the question. The use of the word "moot" looks like a category mistake here. Besides, the ability to distinguish two things is an epistemic question, not an ontological one.
pksebben 17 hours ago | parent
Forming a judgement does not require that the internal process look like anything in particular, though. Nor does logic. What makes logic powerful is precisely that it is abstracted from the process that creates it - it is a formula that can be defined.

I ask the LLM to do some assessment or another. The LLM prints out the chain-of-thought (whether that moniker is accurate is academic - we can read the chain and see that, at the very least, it follows a form recognizable as logic). At the end of the chain-of-thought, we are left with a final conclusion that the model has come to - a judgement.

Whether the internal state of the machine looks anything like our own is irrelevant to these definitions, much like writing out a formalism (if A then B, and if B then C, then A implies C - a machine-checked version of exactly this schema is at the end of this comment). Those symbols have no content save their shape, but when used in accordance with the rules we have laid out for logic, they carry meaning nonetheless.

I'd similarly push back against the idea that the LLM isn't using evidence - I routinely ask my LLMs to do so, and they search the web, integrate the information gleaned into a cohesive writeup, and provide links so I can check their work. If this doesn't constitute "using evidence", then I don't know what does.

w.r.t. "analyze", I think you're adding some human-sauce to the definition. At least in common usage, we've used the term to refer to algorithmic decoction of data for decades now - in systems that we know for a fact have no intentionality other than that directed by the user.

I think I can divine the place where our understandings diverge, and where we're actually on the same track. Per Dennett, I would agree with you that an LLM in its current state lacks intrinsic intention and thus certain related aspects of thought. Any intent must be granted by the user, at the moment. It is on this point, though, that I think we're truly diverging: whether it is possible for a machine to ever have intent.

To the best of my understanding, animal intent traces its roots to the biological imperative - and I think it's a bit of hubris to think we can separate that from human intent. Now, I'm an empiricist before anything else, so I have to qualify this next part as a guess, but I suppose that all one needs to qualify for intent is a single spark - a directive that lives outside the cognitive construct. For us, it lives in Maslow's hierarchy: any human intent can be traced back to some directive there. For a machine, perhaps all that's needed is to provide such a spark, along with a loop that would allow the machine to act without the prodding of the enter key (a toy sketch of what I mean is at the end of this comment).

I should apologize in advance, because I'm about to get even more pedantic. Still, I feel it's relevant, so let's soldier on.

As for whether the image of a thing is the thing, I ask this: is the definition of a thing also that thing? When I use a phrase to define a chair, is the truth of the existence of that collection of atoms and energy contained within the word "chair", or in my meaning in uttering it? Any idea that lives in words is constrained by the understanding of the speaker - so when we talk about things like consciousness and intentionality and reasoning, we are all necessarily taking shortcuts with the actual Truth. It's for this reason that I'm not comfortable laying out a solid boundary where empirical evidence cannot be built to back it up.

If I seem to be picking at the weeds here, it's because I see this as an impending ethical issue.
From what my meagre understanding can grok, there is a nonzero chance that within our lifetime we will be faced with determining the fate of a possibly conscious entity birthed from these machines. If we do not take the time to understand the thing, and instead write it off as "just a machine", we risk doing great harm. I do not mean to say that I believe this is a foregone conclusion, but I think it right and correct that we be careful in examining our own presuppositions about the nature and scope of the thing. We have never had to question our understanding of consciousness in this way before, so I worry that we are badly in need of practice.
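(To make the formalism point above concrete: here is that exact schema checked mechanically in Lean. A toy illustration of mine - the theorem name is invented, and this is not a claim about how LLMs work internally. The point is only that a proof checker accepts it by blind rule application, with no understanding anywhere in the loop, and the conclusion is valid all the same.)

    -- Hypothetical syllogism: from A → B and B → C, derive A → C.
    -- The kernel verifies this by mechanical symbol manipulation alone;
    -- the symbols have no content save their shape, yet the result is valid.
    theorem chain (A B C : Prop) (hab : A → B) (hbc : B → C) : A → C :=
      fun ha => hbc (hab ha)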
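(And here, very roughly, is the "spark plus loop" picture from above - a toy sketch of my own in Python, where every name, the directive, and the stub behaviors are invented for illustration; no real system or API is implied.)

    # The standing directive - the "spark" - lives outside the cognitive core.
    DIRECTIVE = "keep the garden watered"  # invented example

    def observe() -> str:
        # Placeholder: a real agent would read sensors, files, the web, etc.
        return "soil is dry"

    def model(prompt: str) -> str:
        # Placeholder standing in for an LLM call.
        return "water the garden"

    def act(action: str) -> None:
        # Placeholder for executing the chosen action in the world.
        print(f"acting: {action}")

    # The loop: the agent acts without the prodding of the enter key.
    # Bounded here so the sketch terminates; imagine `while True`.
    for _ in range(3):
        state = observe()
        action = model(f"Directive: {DIRECTIVE}\nObserved: {state}\nNext action?")
        act(action)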