pksebben 17 hours ago
Forming a judgement does not require that the internal process look like anything in particular, though. Nor does logic. What makes logic powerful is precisely that it is abstracted from the process that creates it - it is a formula that can be defined. I ask the LLM to do some assessment or other. The LLM prints out its chain-of-thought (whether that moniker is accurate is academic - we can read the chain and see that, at the very least, it follows a form recognizable as logic). At the end of the chain-of-thought, we are left with a final conclusion the model has come to - a judgement. Whether the internal state of the machine looks anything like our own is irrelevant to these definitions, much as with writing out a formalism (if A then B, and if B then C, then A implies C). Those symbols have no form save their shape, but used in accordance with the rules we have laid out for logic, they have meaning nonetheless.

I'd similarly push back against the idea that the LLM isn't using evidence - I routinely ask my LLMs to do so, and they search the web, integrate the information gleaned into a cohesive writeup, and provide links so I can check their work. If that doesn't constitute "using evidence", I don't know what does.

As for "analyze", I think you're adding some human-sauce to the definition. In common usage, at least, we've applied the term to the algorithmic decoction of data for decades now - to systems that we know for a fact have no intentionality other than what the user directs.

I think I can divine the place where our understandings diverge, and where we're actually on the same track. Per Dennett, I would agree with you that an LLM in its current state lacks intrinsic intention, and thus certain related aspects of thought. Any intent must, at the moment, be granted by the user. But it is on this point that I think we truly diverge - on whether it is possible for a machine to ever have intent.
To the best of my understanding, animal intent traces its roots to the biological imperative - and I think it's a bit of hubris to believe we can separate that from human intent. Now, I'm an empiricist before anything else, so I have to qualify this next part as a guess, but I suspect that all one needs to qualify for intent is a single spark - a directive that lives outside the cognitive construct. For us, it lives in Maslow's hierarchy - any human intent can be traced back to some directive there. For a machine, perhaps all that's needed is to provide such a spark (along with a loop that lets the machine act without the prodding of the enter key).

I should apologize in advance at this point, because I'm about to get even more pedantic. Still, I feel it's relevant, so let's soldier on. As for whether the image of a thing is the thing, I ask this: is the definition of a thing also that thing? When I use a phrase to define a chair, is the truth of the existence of that collection of atoms and energy contained within the word "chair", or in my meaning in uttering it? Any idea that lives in words is constrained by the understanding of the speaker - so when we talk about things like consciousness, intentionality, and reasoning, we are all necessarily taking shortcuts with the actual Truth. It's for this reason that I'm not comfortable laying out a solid boundary where empirical evidence cannot be built to back it up.

If I seem to be picking at the weeds here, it's because I see this as an impending ethical issue. From what my meagre understanding can grok, there is a nonzero chance that, within our lifetimes, we will have to determine the fate of a possibly conscious entity birthed from these machines. If we do not take the time to understand the thing, and instead write it off as "just a machine", we risk doing great harm.
I do not mean to say that I believe this is a foregone conclusion, but I think it right and correct that we be careful in examining our own presuppositions about the nature and scope of the thing. We have never had to question our understanding of consciousness in this way before, and I worry that we are badly in need of practice.