keiferski 2 days ago

I don’t see how being critical of this is a knee jerk response.

Thinking, like intelligence and many other words designating complex things, isn’t a simple topic. The word and concept developed in a world where it referred to human beings, and in a lesser sense, to animals.

To simply disregard that entire conceptual history and say, “well it’s doing a thing that looks like thinking, ergo it’s thinking” is the lazy move. What’s really needed is an analysis of what thinking actually means, as a word. Unfortunately everyone is loath to argue about definitions, even when that is fundamentally what this is all about.

Until that conceptual clarification happens, you can expect endless messy debates with no real resolution.

“For every complex problem there is an answer that is clear, simple, and wrong.” - H. L. Mencken

jvanderbot 2 days ago | parent | next [-]

It may be that this tech produces clear, rational, chain-of-logic writeups, but it's not clear that, just because we also produce such writeups after thinking, only thinking can produce them.

It's possible there is much thinking that does not happen in the written word. It's also possible we are only thinking the way LLMs do (by chaining together rationalizations from probable words), and we just aren't aware of it until the thought appears, fully formed, in our "conscious" mind. We don't know. We'll probably never know, not in any real way.

But it sure seems likely to me that we trained a system on the output to circumvent a process/physics we don't understand, just as we always do with ML systems. Never before have we looked at image classifications and decided that's how the eye works, or protein folding and decided that's how biochemistry works. But here we are with LLMs - surely this is how thinking works?

Regardless, I submit that we should always treat human thought/spirit as unknowable and divine and sacred, and that anything that mimics it is a tool, a machine, a deletable and malleable experiment. If we attempt to equate human minds and machines, other problems arise, and none of them good - either the elevation of computers as some kind of "super", or the degradation of humans as just meat matrix multipliers.

grayhatter 2 days ago | parent | next [-]

The contrast between your first and last paragraph is... unexpected

> It may be that this tech produces clear, rational, chain-of-logic writeups, but it's not clear that, just because we also produce such writeups after thinking, only thinking can produce them.

I appreciate the way you describe this idea, I find it likely I'll start describing it the same way. But then you go on to write:

> Regardless, I submit that we should always treat human thought/spirit as unknowable and divine and sacred, and that anything that mimics it is a tool, a machine, a deletable and malleable experiment. If we attempt to equate human minds and machines, other problems arise, and none of them good - either the elevation of computers as some kind of "super", or the degradation of humans as just meat matrix multipliers.

Which I find to be the exact argument that you started by discarding.

It's not clear that equating organic and synthetic thought will have any meaningful outcome at all, let alone one worth the baseless anxiety that it must be bad. Equally, it seems absolutely insane to claim that anything is unknowable, and that because humanity doesn't have a clear foundational understanding we should pretend that it's either divine or sacred. Having spent any time watching the outcome of the thoughts of people, neither divine nor sacred is a reasonable attribute to apply. More importantly, I'd submit that you shouldn't be afraid to explore things you don't know, and you shouldn't advocate for others to adopt your anxieties.

jvanderbot 2 days ago | parent | next [-]

> It's not clear that equating organic and synthetic thought will have any meaningful outcome at all,

I agree! I'm saying "If we equate them, we shortcut all the good stuff, e.g., understanding", because "it may be that this tech produces what we can, but that doesn't mean we are the same", which is good because it keeps us learning vs reducing all of "thinking" to just "whatever the latest ChatGPT does". We have to continue to believe there is more to thinking, if only because it pushes us to make it better and to keep "us" as the benchmark.

Perhaps I chose the wrong words, but in essence what I'm saying is that giving up agency to a machine that was built to mimic our agency (by definition as a ML system) should be avoided at all costs.

bunderbunder 2 days ago | parent | prev [-]

> Never before have we looked at image classifications and decided that's how the eye works

Actually we have, several times. But the way we arrived at those conclusions is worth observing:

1. ML people figure out how the ML mechanism works.

2. Neuroscientists independently figure out how brains do it.

3. Observe any analogies that may or may not exist between the two underlying mechanisms.

I can't help but notice how that's a scientific way of doing it. By contrast, the way people arrive at similar conclusions when talking about LLMs tends to consist of observing that two things are cosmetically similar, so they must be the same. That's not just pseudoscientific; it's the mode of reasoning that leads people to believe in sympathetic magic.

pmarreck 2 days ago | parent | prev | next [-]

So it seems to be a semantics argument. We don't have a name for a thing that is "useful in many of the same ways 'thinking' is, except not actually consciously thinking"

I propose calling it "thunking"

skeeter2020 2 days ago | parent | next [-]

I don't like it for a permanent solution, but "synthetic thought" might make a good enough placeholder until we figure this out. It feels most important to differentiate because I believe some parties have a personal interest in purposely confusing human thought with whatever LLMs are doing right now.

Libidinalecon 2 days ago | parent | next [-]

This is complete nonsense.

If you do math in your head or math with a pencil/paper or math with a pocket calculator or with a spreadsheet or in a programming language, it is all the same thing.

The only difference with LLMs is the anthropomorphization of the tool.

pmarreck 2 days ago | parent | prev | next [-]

agreed.

also, sorry but you (fellow) nerds are terrible at naming.

while "thunking" possibly name-collides with "thunks" from CS, the key is that it is memorable, 2 syllables, a bit whimsical and just different enough to both indicate its source meaning as well as some possible unstated difference. Plus it reminds me of "clunky" which is exactly what it is - "clunky thinking" aka "thunking".

And frankly, the idea it's naming is far bigger than what a "thunk" is in CS

N7lo4nl34akaoSN 2 days ago | parent | prev [-]

.

Ir0nMan 2 days ago | parent | next [-]

>"artificial thought"

How about Artificial Intelligence?

pmarreck 2 days ago | parent [-]

"intelligence" encompassing "thinking" then becomes the hangup.

I still say it needs a new name. If we want to be generous, we could state "the limit as time approaches infinity of thunking, is thinking." (I don't believe we will ever achieve astronomically-superior AGI, and certainly don't believe it will ever have a will of its own that someone else didn't give it- which just makes it a tool.)

pmarreck 2 days ago | parent | prev [-]

that's too clunky. in fact, "clunky thinking" is what gave me the idea of "thunking"

you guys would have called lightsabers "laser swords" like Lucas originally did before Alec Guinness corrected him

GoblinSlayer 2 days ago | parent | prev | next [-]

They moved the goalposts. Linux and worms think too; the question is how smart they are. And if you assume consciousness has no manifestation even in the case of humans, caring about it is pointless too.

fellowniusmonk 2 days ago | parent | next [-]

Yes, worms think, let the computers have thinking too, the philosophers can still argue all they want about consciousness.

Humans are special, we emit meaning the way stars emit photons, we are rare in the universe as far as empirical observation has revealed. Even with AGI the existence of each complex meaning generator will be a cosmic rarity.

For some people that seems to be not enough; due to their factually wrong world views they see themselves as common and worthless (when they empirically aren't) and need this little psychological boost of unexaminable metaphysical superiority.

But there is an issue, of course: the type of thinking humans do is dangerous but net positive and relatively stable. We have a long history in which most instantiations of humans can persist and grow themselves and the species as a whole; we have a track record.

These new models do not. People have brains that, as they stop functioning, stop persisting the apparatus that supports the brain, and they die; people tend to become less capable and active as their thinking deteriorates, and hold less influence over others except in rare cases.

This is not the case for an LLM: they seem to be able to hallucinate endlessly and, as long as they have access to the outside world, maintain roughly the same amount of causal leverage; the clarity and accuracy of their thinking isn't tied to their persisting.

fragmede 2 days ago | parent [-]

Are we that special? We may be the only species left on Earth that's built civilization, but there are other species on Earth that we've deemed sentient, even if they don't have smartphones. (That may argue that they're smarter than us though.) If octopodes can dream, if elephants get depressed when their spouse dies, then I'd say we're not so totally alone on our own planet, and it seems, despite no evidence, that we can't be totally alone in the universe. That is for philosophy professors to ponder via Drake's equation until we have irrefutable evidence, however.

fellowniusmonk 2 days ago | parent [-]

Empirically? Observationally? Yes.

Until we have empirical evidence to the contrary we need to preserve our species.

If we discover other smarter species or never do, either way I don't care, it's immaterial to the precautionary principle.

We are fucking awesome and rare, and any other species with our amount of meaning generation or even capability for meaning generation is also fucking awesome.

I would 100% grant that cetaceans and octopuses have human-level or higher intelligence; I don't care. I don't need to put other species' capabilities down to highlight my species' accomplishments. The simple fact is that we have written more facts about the universe, discovered more, done more, and gone further than any species we have empirically observed.

I mean it's incontrovertibly true, maybe dolphins have crazy libraries I'm not aware of, but until we verify that fact we need to preserve ourselves (and afterwards too), and we should preserve them too.

Even with other species, aliens, etc, they all need to be preserved because we can't ex ante predict which entities within a species will be part of the causal chain that solves entropy (if it's even possible.)

goatlover 2 days ago | parent | prev [-]

What does it mean to assume consciousness has no manifestation even in the case of humans? Is that denying that we have an experience of sensation like colors, sounds, or that we experience dreaming, memories, inner dialog, etc?

That's absurd on the face of it, so I don't know what it means. You would have to be a philosophical zombie to make such an argument.

conorcleary 2 days ago | parent | prev [-]

Clinking? Clanker Thunking?

mhb 2 days ago | parent [-]

Close. Clanking.

conorcleary a day ago | parent [-]

Much better, ashamed I missed it lol

terminalshort 2 days ago | parent | prev | next [-]

But we don't have a more rigorous definition of "thinking" than "it looks like it's thinking." You are making the mistake of accepting that a human is thinking by this simple definition, but demanding a higher more rigorous one for LLMs.

solumunus 2 days ago | parent [-]

I agree. The mechanism seems irrelevant if the results are the same. If it’s useful in the exact way that human thinking is useful then it may as well be thinking. It’s like a UFO pulling itself through the sky using gravitational manipulation while people whine that it’s not actually flying.

lukebuehler 2 days ago | parent | prev | next [-]

If we cannot say they are "thinking" or "intelligent" while we do not have a good definition--or, even more difficult, unanimous agreement on a definition--then the discussion just becomes about output.

They are doing useful stuff, saving time, etc., which can be measured. Thus the definition of AGI has also largely become: "can produce or surpass the economic output of a human knowledge worker".

But I think this detracts from the more interesting discussion of what they are more essentially. So, while I agree that we should push on getting our terms defined, I'd rather work with a hazy definition than derail so many AI discussions into mere economic output.

Rebuff5007 2 days ago | parent | next [-]

Here's a definition: how impressive is the output relative to the input? And by input, I don't just mean the prompt, but all the training data itself.

Do you think someone who has only ever studied pre-calc would be able to work through a calculus book if they had sufficient time? how about a multi-variable calc book? How about grad level mathematics?

IMO intelligence and thinking are strictly about this ratio: what can you extrapolate from the smallest amount of information possible, and why? From this perspective, I don't think any of our LLMs are remotely intelligent, despite what our tech leaders say.

kryogen1c 2 days ago | parent | next [-]

Hear, hear!

I have long thought this, but not had as good a way to put it as you did.

If you think about geniuses like Einstein and Ramanujan, they understood things before they had the mathematical language to express them. LLMs are the opposite; they fail to understand things after untold effort, training data, and training.

So the question is, how intelligent are LLMs when you reduce their training data and training? Since they rapidly devolve into nonsense, the answer must be that they have no internal intelligence

Ever had the experience of helping someone who's chronically doing the wrong thing, only to eventually find they had an incorrect assumption, an incorrect piece of reasoning generating deterministic wrong answers? LLMs don't do that; they just lack understanding. They'll hallucinate unrelated things because they don't know what they're talking about - you may have also had this experience with someone :)

hodgehog11 2 days ago | parent | next [-]

> So the question is, how intelligent are LLMs when you reduce their training data and training? Since they rapidly devolve into nonsense, the answer must be that they have no internal intelligence

This would be the equivalent of removing all of a human's senses from birth and expecting them to somehow learn things. They will not. Therefore humans are not intelligent?

> LLMs dont do that; they just lack understanding.

You have no idea what they are doing. Since they are smaller than the dataset, they must have learned an internal algorithm. This algorithm is drawing patterns from somewhere - those are its internal, incorrect assumptions. It does not operate in the same way that a human does, but it seems ridiculous to say that it lacks intelligence because of that.

It sounds like you've reached a conclusion, that LLMs cannot be intelligent because they have said really weird things before, and are trying to justify it in reverse. Sure, it may not have grasped that particular thing. But are you suggesting that you've never heard a human who is feigning understanding of a particular topic say some really weird things akin to an LLM? I'm an educator, and I have heard the strangest things that I just cannot comprehend no matter how much I dig. It really feels like shifting goalposts. We need to do better than that.

pka 2 days ago | parent [-]

> and are trying to justify it in reverse

In split-brain experiments this is exactly how one half of the brain retroactively justifies the action of the other half. Maybe it is the case in LLMs that an overpowered latent feature sets the overall direction of the "thought" and then inference just has to make the best of it.

nsagent 2 days ago | parent | prev [-]

You might be interested in reading about the minimum description length (MDL) principle [1]. Despite all the dissenters to your argument, what you're positing is quite similar to MDL. It's how you can fairly compare models (I did some research in this area for LLMs during my PhD).

Simply put, to compare models, you describe both the model and the training data using a code (usually reported as a number of bits). The trained model that represents the data in the fewest bits is the more powerful model.

This paper [2] from ICML 2021 shows a practical approach for attempting to estimate MDL for NLP models applied to text datasets.

[1]: http://www.modelselection.org/mdl/

[2]: https://proceedings.mlr.press/v139/perez21a.html
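
To make that concrete, here's a rough two-part-code sketch in Python (all numbers are toy values, not a real estimator; a practical estimate would follow the approach in [2]):

    import math

    def description_length_bits(model_bits, token_probs):
        # Two-part code: bits to write down the model itself, plus bits to
        # encode the training data under the model (negative log-likelihood, base 2).
        data_bits = -sum(math.log2(p) for p in token_probs)
        return model_bits + data_bits

    # Toy per-token probabilities that two trained models assign to the same corpus.
    dl_big   = description_length_bits(8e9, [0.30, 0.60, 0.90, 0.50])
    dl_small = description_length_bits(1e9, [0.10, 0.30, 0.80, 0.20])

    # Under MDL, prefer whichever model yields the smaller total.
    print("prefer", "big" if dl_big < dl_small else "small")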

mycall 2 days ago | parent | prev | next [-]

Animals think, but they come with instincts, which breaks the output-relative-to-input test you propose. Behaviors are essentially pre-programmed input from millions of years of evolution, stored in DNA/neurology. The learning is thus typically associative and domain-specific, not abstract extrapolation.

A crow bending a piece of wire into a hook to retrieve food demonstrates a novel solution extrapolated from minimal, non-instinctive, environmental input. This kind of zero-shot problem-solving aligns better with your definition of intelligence.

tremon 2 days ago | parent | prev | next [-]

I'm not sure I understand what you're getting at. You seem to be on purpose comparing apples and oranges here: for an AI, we're supposed to include the entire training set in the definition of its input, but for a human we don't include the entirety of that human's experience and only look at the prompt?

Rebuff5007 2 days ago | parent [-]

> but for a human we don't include the entirety of that human's experience and only look at the prompt?

When did I say that? Of course you look at a human's experience when you judge the quality of their output. And you also judge their output based on the context they did their work in. Newton wouldn't be Newton if he was the 14th guy to claim that the universe is governed by three laws of motion. Extending the example I used above, I would be more impressed if an art student aced a tough calc test than a math student, given that a math student probably has spent much more time with the material.

"Intelligence and "thinking" are abstract concepts, and I'm simply putting forward a way that I think about them. It works very much outside the context of AI too. The "smartest" colleagues I've worked with are somehow able to solve a problem with less information or time than I need. Its usually not because they have more "training data" than me.

lukebuehler 2 days ago | parent | prev | next [-]

That's an okay-ish definition, but to me this is more about whether this kind of "intelligence" is worth it, not whether it is intelligence itself. The current AI boom clearly thinks it is worth putting in that much input to get the current frontier-model level of output. Also, don't forget the input scales across roughly 1B weekly users at inference time.

I would say a good definition has to, minimally, take on the Turing test (even if you disagree, you should say why). Or in current vibe parlance, it does "feel" intelligent to many people--they see intelligence in it. In my book this allows us to call it intelligent, at least loosely.

fragmede 2 days ago | parent | prev | next [-]

There are plenty of humans who will never "get" calculus, despite numerous attempts at the class and countless hours of 1:1 tutoring. Are those people not intelligent? Do they not think? We could say they aren't, but by the metric of making money, plenty of people are smart enough to be rich while college math professors aren't. And while that's a facile way of measuring someone's worth or their contribution to society (some might even say "bad"), it remains that even if someone can't understand calculus, some of them are intelligent enough to understand humans well enough to get rich in some fashion that wasn't simply handed to them.

chipsrafferty 2 days ago | parent [-]

I don't think it's actually true that someone with:

1. A desire to learn calculus

2. A good teacher

3. No mental impairments such as dementia or other major brain drainers

could not learn calculus. Most people don't really care to try or don't get good resources. What you see as an intelligent mathematician is almost always someone born with better resources who was also encouraged to pursue math.

fragmede 2 days ago | parent [-]

1 and 3 are loopholes large enough to drive a semi truck through. You could calculate how far the truck traveled with a double integral if you have its acceleration, however.
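
For what it's worth, assuming constant acceleration a and starting from rest, that double integral works out to:

    distance(t) = ∫∫ a dt dt = (1/2) · a · t²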

hodgehog11 2 days ago | parent | prev | next [-]

Yeah, that's compression. Although your later comments neglect the many years of physical experience that humans have as well as the billions of years of evolution.

And yes, by this definition, LLMs pass with flying colours.

saberience 2 days ago | parent [-]

I hate when people bring up this “billions of years of evolution” idea. It’s completely wrong and deluded in my opinion.

Firstly humans have not been evolving for “billions” of years.

Homo sapiens have been around for maybe 300,000 years, and the “Homo” genus for 2-3 million years. Before that our lineage split from the chimps, and that was 6-7 million years ago.

If you want to look at the entire arc of brain development, i.e. from mouse-like creatures through to apes and then humans, that’s 200M years.

If you want to think about generations, it’s only 50-75M generations, i.e. “training loops”.

That’s really not very many.

Also, the bigger point is this: for 99.9999% of that time we had no writing, nor any need for complex thinking.

So our ability to reason about maths, writing, science etc. has only existed in the last 2000-2500 years! I.e. only roughly 200 or so generations.

Our brain was not “evolved” to do science, maths etc.

Most of evolution was us running around just killing stuff and eating and having sex. It’s only a tiny tiny amount of time that we’ve been working on maths, science, literature, philosophy.

So actually, these models have had a massive, massive amount more training than humans did to do roughly the same thing, while using insane amounts of computing power and energy.

Our brains were evolved for a completely different world, environment, and daily life than the life we lead now.

So yes, LLMs are good, but they have been exposed to more data and training time than any human could be unless we lived for 100,000 years, and they still perform worse than we do on most problems!

hodgehog11 2 days ago | parent | next [-]

Okay, fine, let's remove the evolution part. We still have an incredible amount of our lifetime spent visualising the world and coming to conclusions about the patterns within. Our analogies are often physical and we draw insights from that. To say that humans only draw their information from textbooks is foolhardy; at the very least, you have to agree there is much more.

I realise upon reading the OP's comment again that they may have been referring to "extrapolation", which is hugely problematic from the statistical viewpoint when you actually try to break things down.

My argument for compression asserts that LLMs see a lot of knowledge, but are actually quite small themselves. To output a vast amount of information in such a small space requires a large amount of pattern matching and underlying learned algorithms. I was arguing that humans are actually incredible compressors because we have many years of history in our composition. It's a moot point though, because it is the ratio of output to capacity that matters.

vrighter 10 hours ago | parent [-]

They can't learn iterative algorithms if they cannot execute loops. And blurting out an output which we then feed back in does not count as a loop. That's a separate invocation with fresh inputs, as far as the system is concerned.

They can attempt to mimic the results for small instances of the problem, where there are a lot of worked examples in the dataset, but they will never ever be able to generalize and actually give the correct output for arbitrary sized instances of the problem. Not with current architectures. Some algorithms simply can't be expressed as a fixed-size matrix multiplication.
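
A toy illustration of that last point (my own sketch; the "fixed depth" loop is just an analogy for a bounded forward pass, not an actual transformer):

    def gcd_loop(a, b):
        # Euclid's algorithm: the number of iterations depends on the input.
        while b:
            a, b = b, a % b
        return a

    def gcd_fixed_depth(a, b, depth=4):
        # Unroll exactly `depth` steps, like a network with a fixed number of layers.
        for _ in range(depth):
            if b:
                a, b = b, a % b
        return a

    print(gcd_loop(1071, 462), gcd_fixed_depth(1071, 462))            # 21 21 (4 steps suffice)
    print(gcd_loop(832040, 514229), gcd_fixed_depth(832040, 514229))  # 1 121393 (needs ~28 steps)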

GoblinSlayer 2 days ago | parent | prev | next [-]

>Most of evolution was us running around just killing stuff and eating and having sex.

Tell Boston Dynamics how to do that.

Mice inherited their brain from their ancestors. You might think you don't need a working brain to reason about math, but that's because you don't know how thinking works; it's an argument from ignorance.

saberience 2 days ago | parent [-]

You've missed the point entirely.

People argue that humans have had the equivalent of training a frontier LLM for billions of years.

But training a frontier LLM involves taking multiple petabytes of data, effectively all of recorded human knowledge and experience, every book ever written, every scientific publication ever written, all of known maths, science, encyclopedias, podcasts, etc. And then training on that for millions of years' worth of GPU-core time.

You cannot possibly equate human evolution with LLM training, it's ridiculous.

Our "training" time didn't involve any books, maths, science, reading, 99.9999% of our time was just in the physical world. So you can quite rationally argue that our brains ability to learn without training is radically better and more efficient that the training we do for LLMs.

Us running around in the jungle wasn't training our brain to write poetry or compose music.

dwaltrip 2 days ago | parent [-]

> Us running around in the jungle wasn't training our brain to write poetry or compose music.

This is a crux of your argument; you need to justify it. It sounds way off base to me. Kinda reads like an argument from incredulity.

KalMann 2 days ago | parent | next [-]

No, I think what he said was true. Human brains have something about them that allows for the invention of poetry or music. It wasn't something learned through prior experience and observation, because there aren't any poems in the wild. You might argue there's something akin to music, but human music goes far beyond anything in nature.

hodgehog11 2 days ago | parent [-]

We have an intrinsic (and strange) reward system for creating new things, and it's totally awesome. LLMs only started to become somewhat useful once researchers tried to tap in to that innate reward system and create proxies for it. We definitely have not succeeded in creating a perfect mimicry of that system though, as any alignment researcher would no doubt tell you.

saberience 2 days ago | parent | prev [-]

So you're arguing that "running around in the jungle" is equivalent to feeding the entirety of human knowledge in LLM training?

Are you suggesting that somehow there were books in the jungle, or perhaps boardgames? Perhaps there was a computer lab in the jungle?

Were apes learning to conjugate verbs while munching on bananas?

I don't think I'm suggesting anything crazy here... I think people who say LLM training is equivalent to "billions of years of evolution" need to justify that argument far more than I need to justify that running around in the jungle isn't equivalent to mass-processing petabytes of highly rich, complex, dense, and VARIED information.

One year of running around in the same patch of jungle, eating the same fruit, killing the same insects, and having sex with the same old group of monkeys isn't going to be equal to training with the super varied, complete, entirety of human knowledge, is it?

If you somehow think it is though, I'd love to hear your reasoning.

hodgehog11 2 days ago | parent | next [-]

There is no equivalency, only contributing factors. One cannot deny that our evolutionary history has contributed to our current capacity, probably in ways that are difficult to perceive unless you're an anthropologist.

Language is one mode of expression, and humans have many. This is another factor that makes humans so effective. To be honest, I would say that physical observation is far more powerful than all the bodies of text, because it is comprehensive and can respond to interaction. But that is merely my opinion.

No-one should be arguing that an LLM training corpus is the same as evolution. But information comes in many forms.

chipsrafferty 2 days ago | parent | prev [-]

You're comparing the hyper-specific evolution of 1 individual (an AI system) to the more general evolution of the entire human species (billions of individuals). It's as if you're forgetting how evolution actually works - natural selection - and forgetting that, when you have hundreds of billions of individuals over thousands of years, even small insights gained from "running around in the jungle" can compound in ways that are hard to conceptualize.

I'm saying that LLM training is not equivalent to billions of years of evolution because LLMs aren't trained using evolutionary algorithms; there will always be fundamental differences. However, it seems reasonable to think that the effect of that "training" might be more or less around the same level.

Ajakks 2 days ago | parent | prev | next [-]

I'm so confused as to how you think you can cut an endless chain at the mouse.

Were mammals the first thing? No. Earth was a ball of ice for a billion years - all life at that point existed solely around thermal vents at the bottom of the oceans... that's inside of you, too.

Evolution doesn't forget - everything that all life has ever been "taught" (violently programmed into us over incredible timelines), all that has ever been learned in the chain of DNA from the single cell to human beings - it's ALL still there.


skeeter2020 2 days ago | parent | prev | next [-]

This feels too linear. Machines are great at ingesting huge volumes of data, following relatively simple rules and producing optimized output, but are LLMs sufficiently better than humans at finding windy, multi-step connections across seemingly unrelated topics & fields? Have they shown any penchant for novel conclusions from observational science? What I think your ratio misses is the value in making the targeted extrapolation or hypothesis that holds up out of a giant body of knowledge.

blks 2 days ago | parent [-]

Are you aware of anything novel, produced by an LLM?

jononor 2 days ago | parent | prev | next [-]

For more on this perspective, see the paper On the Measure of Intelligence (F. Chollet, 2019). And more recently, the ARC challenge/benchmarks, which are early attempts at using this kind of definition in practice to improve current systems.

rolisz 2 days ago | parent | prev [-]

Is the millions of years of evolution part of the training data for humans?

Rebuff5007 2 days ago | parent [-]

Millions of years of evolution have clearly equipped our brain with some kind of structure (or "inductive bias") that makes it possible for us to actively build a deep understanding for our world... In the context of AI I think this translates more to representations and architecture than it does with training data.

goatlover 2 days ago | parent [-]

Because genes don't encode the millions of years of experience from ancestors, despite how interesting that is in, say, the Dune universe (with the help of the spice melange). My understanding is genes don't even specifically encode the exact structure of the brain. It's more of a recipe the brain gets generated from than a blueprint, with young brains doing a lot of pruning as they start experiencing the world. It's a malleable architecture that self-adjusts as needed.

felipeerias 2 days ago | parent | prev | next [-]

The discussion about “AGI” is somewhat pointless, because the term is nebulous enough that it will probably end up being defined as whatever comes out of the ongoing huge investment in AI.

Nevertheless, we don’t have a good conceptual framework for thinking about these things, perhaps because we keep trying to apply human concepts to them.

The way I see it, a LLM crystallises a large (but incomplete and disembodied) slice of human culture, as represented by its training set. The fact that a LLM is able to generate human-sounding language

roenxi 2 days ago | parent | next [-]

Not quite pointless - something we have established with the advent of LLMs is that many humans have not attained general intelligence. So we've clarified something that a few people must have been getting wrong; I used to think that the bar was set so that almost all humans met it.

Jensson 2 days ago | parent | next [-]

What do you mean? Almost every human can go to school and become a stable professional at some job; that is the bar to me, and today's LLMs cannot do that.

roenxi a day ago | parent [-]

LLMs are clever enough to hold down a professional job, and they've had far less time learning than the average human. If that is the bar then AGI has been achieved.

goatlover 2 days ago | parent | prev [-]

Almost all humans do things daily that LLMs don't. It's only if you define general intelligence to be proficiency at generating text instead of successfully navigating the world while pursuing goals such as friendships, careers, families, politics, managing health.

LLMs aren't Data (Star Trek) or Replicants (Blade Runner). They're not even David or the androids from the movie A.I.

idiotsecant 2 days ago | parent | prev | next [-]

I think it has a practical, easy definition. Can you drop an AI into a terminal, give it the same resources as a human, and reliably get independent work product greater than that human would produce across a wide domain? If so, it's an AGI.

alternatex 2 days ago | parent [-]

Doesn't sound like AGI without physical capabilities. It's not general if it's bound to digital work.

chipsrafferty 2 days ago | parent | next [-]

I think the intelligence is general if it can do any remote job that only requires digital IO.

It's general intelligence, not general humanity

idiotsecant 2 days ago | parent | prev [-]

Any AGI capable of this wouldn't have much trouble with physical operation of equipment, of all things.

lukebuehler 2 days ago | parent | prev [-]

I agree that the term can muddy the waters, but as a shorthand for roughly "an agent calling an LLM (or several LLMs) in a loop, producing similar economic output to a human knowledge-worker", it is useful. And if you pay attention to the AI leaders, that's what the definition has become.

keiferski 2 days ago | parent | prev [-]

Personally I think that kind of discussion is fruitless, not much more than entertainment.

If you’re asking big questions like “can a machine think?” or “is an AI conscious?” without doing the work of clarifying your concepts, then you’re only going to get vague ideas, sci-fi cultural tropes, and a host of other things.

I think the output question is also interesting enough on its own, because we can talk about the pragmatic effects of ChatGPT on writing without falling into this woo trap of thinking ChatGPT is making the human capacity for expression somehow extinct. But this requires one to cut through the hype and reactionary anti-hype, which is not an easy thing to do.

That is how I myself see AI: immensely useful new tools, but in no way some kind of new entity or consciousness, at least without doing the real philosophical work to figure out what that actually means.

jlaternman 2 days ago | parent | next [-]

I agree with almost all of this.

IMO the issue is we won't be able to adequately answer this question before we first clearly describe what we mean by conscious thinking as applied to ourselves. First we'd need to define our own consciousness and what we mean by our own "conscious thinking" in a much, much clearer way than we currently do.

If we ever reach that point, I think we'd be able to fruitfully apply it to AI, etc., to assess.

Unfortunately we haven't been obstructed from answering this question about ourselves for centuries or millennia, but have failed to do so, so it's unlikely to happen suddenly now. Unless we use AIs to first solve that problem of defining our own consciousness, before applying it back on them. Which would be a deeply problematic order, since nobody would trust a breakthrough in the understanding of consciousness that came from AI, that is then potentially used to put them in the same class and define them as either thinking things or conscious things.

Kind of a shame we didn't get our own consciousness worked out before AI came along. Then again, it wasn't for lack of trying… Philosophy commanded the attention of great thinkers for a long time.

lukebuehler 2 days ago | parent | prev [-]

I do think it raises interesting and important philosophical questions. Just look at all the literature around the Turing test--both supporters and detractors. This has been a fruitful avenue to talk about intelligence even before the advent of gpt.

WhyOhWhyQ 2 days ago | parent | prev | next [-]

What does it mean? My stance is it's (obviously and only a fool would think otherwise) never going to be conscious because consciousness is a physical process based on particular material interactions, like everything else we've ever encountered. But I have no clear stance on what thinking means besides a sequence of deductions, which seems like something it's already doing in "thinking mode".

nearbuy 2 days ago | parent | next [-]

> My stance is it's (obviously and only a fool would think otherwise) never going to be conscious because consciousness is a physical process based on particular material interactions, like everything else we've ever encountered.

Seems like you have that backwards. If consciousness is from a nonphysical process, like a soul that's only given to humans, then it follows that you can't build consciousness with physical machines. If it's purely physical, it could be built.

WhyOhWhyQ 4 hours ago | parent | next [-]

In your experience does every kind of physical interaction behave the same as every other kind? If I paint a wooden block red and white does it behave like a bar magnet? No. And that's because particular material interactions are responsible for a large magnetic effect.

chipsrafferty 2 days ago | parent | prev [-]

It would conceivably be possible to have a lot of physical states. That doesn't mean that they are actually possible from our current state and rewrite rules. So it's not actually a given that it can be built just because it's physical.

Your very idea is also predicated on the idea that it's possible for a real object to exist that isn't physical, and I think most modern philosophers reject the idea of a spiritual particle.

nearbuy 2 days ago | parent [-]

I'm not saying that souls or non-physical things exist, nor that everything physical is feasible for us to build. I was replying to the opinion that AI is never going to be conscious because consciousness is a physical process. I just don't see how that follows.

pixl97 2 days ago | parent | prev [-]

> is a physical process based on particular material interactions,

This is a pretty messy argument as computers have been simulating material interactions for quite some time now.

WhyOhWhyQ 4 hours ago | parent [-]

It doesn't matter how much like a bar magnet a wooden block painted red and white can be made to look, it will never behave like one.

pixl97 2 hours ago | parent [-]

Analogies don't seem to be your strong point.

naasking 2 days ago | parent | prev | next [-]

> To simply disregard that entire conceptual history and say, “well it’s doing a thing that looks like thinking, ergo it’s thinking” is the lazy move. What’s really needed is an analysis of what thinking actually means, as a word. Unfortunately everyone is loath to argue about definitions, even when that is fundamentally what this is all about.

This exact argument applies to "free will", and that definition has been debated for millennia. I'm not saying don't try, but I am saying that it's probably a fuzzy concept for a good reason, and treating it as merely a behavioural descriptor for any black box that features intelligence and unpredictable complexity is practical and useful too.

pennomi 2 days ago | parent [-]

The problem with adding definitions to words like “thinking” and “free will” is that doing so means humans can no longer pretend they are special.

Even in this thread, the number of people claiming some mystical power separating humans from all the rest of nature is quite noticeable.

naasking 2 days ago | parent [-]

I get it, but it's not trivial to be precise enough at this point to avoid all false positives and false negatives.

killerstorm 2 days ago | parent | prev | next [-]

People have been trying to understand the nature of thinking for thousands of years. That's how we got logic, math, concepts of inductive/deductive/abductive reasoning, philosophy of science, etc. There were people who spent their entire careers trying to understand the nature of thinking.

The idea that we shouldn't use the word until further clarification is rather hilarious. Let's wait a hundred years until somebody defines it?

It's not how words work. People might introduce more specific terms, of course. But the word already means what we think it means.

keiferski 2 days ago | parent | next [-]

You’re mixing and missing a few things here.

1. Not true. All previous discussion of thinking was in reference to human and animal minds. The reason this is a question in the first place right now is because we ostensibly have a new thing which looks like a human mind but isn’t. That’s the question at hand here.

2. The question in this particular topic is not about technological “progress” or anything like it. It’s about determining whether machines can think, or if they are doing something else.

3. There are absolutely instances in which the previous word doesn’t quite fit the new development. We don’t say that submarines are swimming like a fish or sailing like a boat. To suggest that “no, actually they are just swimming” is pretty inadequate if you’re trying to actually describe the new phenomenon. AIs and thinking seem like an analogous situation to me. They may be moving through the water just like fish or boats, but there is obviously a new phenomenon happening.

killerstorm 2 days ago | parent [-]

1. Not true. People have been trying to analyze whether mechanical/formal processes can "think" since at least 18th century. E.g. Leibniz wrote:

> if we could find characters or signs appropriate for expressing all our thoughts as definitely and as exactly as arithmetic expresses numbers or geometric analysis expresses lines, we could in all subjects in so far as they are amenable to reasoning accomplish what is done in arithmetic and geometry

2. You're missing the fact that the meaning of words is defined through their use. It's an obvious fact that if people call a certain phenomenon "thinking" then they call that "thinking".

3. The normal process is to introduce more specific terms and keep more general terms general. E.g. people doing psychometrics were not satisfied with "thinking", so they introduced e.g. "fluid intelligence" and "crystallized intelligence" as different kinds of abilities. They didn't have to redefine what "thinking" means.

lossyalgo 2 days ago | parent [-]

re #2: Do people call it thinking, or is it just clever marketing from AI companies, given that whenever you ask a question it repeatedly prints out "...thinking...", and offers various modes with the word "thinking" written somewhere?

The AI companies obviously want the masses to just assume these are intelligent beings who think like humans and so we can just trust their output as being truthful.

I have an intelligent IT colleague who doesn't follow the AI news at all and who has zero knowledge of LLMs, other than that our company recently allowed us limited Copilot usage (with guidelines as to what data we are allowed to share). I noticed a couple of weeks ago that he was asking it various mathematical questions, and I warned him to be wary of the output. He asked why, so I asked him to ask Copilot/ChatGPT "how many r letters are in the word strawberry". Copilot initially said 2, then said after thinking about it that actually it was definitely 3, then thought about it some more and said it can't say with reasonable certainty, but it would assume it must be 2. We repeated the experiment with completely different results, but the answer was still wrong. On the 3rd attempt it got it right, though the "thinking" stages were most definitely bogus. Considering how often this question comes up in various online forums, I would have assumed LLMs would finally get this right, but alas, here we are. I really hope the lesson instilled some level of skepticism about trusting the output of AI without first double-checking.

marliechiller 2 days ago | parent | prev [-]

> But the word already means what we think it means.

But that word can mean different things to different people. With no definition, how can you even begin to have a discussion around something?

killerstorm 2 days ago | parent [-]

Again, people were using words for thousands of years before there were any dictionaries/linguists/academics.

Top-down theory of word definitions is just wrong. People are perfectly capable of using words without any formalities.

marliechiller 2 days ago | parent [-]

I'd argue the presence of dictionaries proves the exact opposite. People realised there was an issue of talking past one another due to inexact definitions and then came to an agreement on those definitions, wrote them down and built a process of maintaining them.

In any case, even if there isn't a _single_ definition of a given subject, in order to have a discussion around a given area, both sides need to agree on some shared understanding to even begin to debate in good faith in the first place. It's precisely this lack of definition which causes a breakdown in conversation in a myriad of different areas. A recent obvious (morbid) example would be "genocide".

killerstorm 2 days ago | parent [-]

Alright, if you got that conclusion from the existence of dictionaries, what do you get from this fact:

Wittgenstein, who's considered one of the most brilliant philosophers of the 20th century, in _Philosophical Investigations_ (widely regarded as the most important book of 20th-century philosophy) does not provide definitions, but instead goes through a series of examples, remarks, etc. In the preface he notes that this structure is deliberate and that he could not write it differently. The topics of the book include the philosophy of language ("the concepts of meaning, of understanding, of a proposition, of logic, the foundations of mathematics, states of consciousness,...").

His earlier book _Tractatus Logico-Philosophicus_ was very definition-heavy. And, obviously, Wittgenstein was well aware of things like dictionaries, and, well, all philosophical works up to that point. He's not the guy who's just slacking.

Another thing to note is that attempts to build AI using definitions of words failed, and not for a lack of trying. (E.g. Cyc project is running since 1980s: https://en.wikipedia.org/wiki/Cyc). OTOH LLMs which derive word meaning from usage rather than definition seems to work quite well.

awillen 2 days ago | parent | prev | next [-]

This is it - it's really about the semantics of thinking. Dictionary definitions are: "Have a particular opinion, belief, or idea about someone or something." and "Direct one's mind toward someone or something; use one's mind actively to form connected ideas."

Which doesn't really help, because you can of course say that when you ask an LLM a question of opinion and it responds, it's having an opinion - or that it's just predicting the next token and in fact has no opinions, because in a lot of cases you could probably get it to produce the opposite opinion.

Same with the second definition - it seems to really hinge on the definition of the word mind. Though I'll note the definitions for that are "The element of a person that enables them to be aware of the world and their experiences, to think, and to feel; the faculty of consciousness and thought." and "A person's intellect." Since those specify a person, an LLM wouldn't qualify, though of course dictionaries are descriptive rather than prescriptive, so it's fully possible that the meaning gets updated as people start speaking about LLMs as though they are thinking and have minds.

Ultimately I think it just... doesn't matter at all. What's interesting is what LLMs are capable of doing (crazy, miraculous things) rather than whether we apply a particular linguistic label to their activity.

anon291 2 days ago | parent | prev | next [-]

The simulation of a thing is not the thing itself because all equality lives in a hierarchy that is impossible to ignore when discussing equivalence.

Part of the issue is that our general concept of equality is limited by a first order classical logic which is a bad basis for logic

zinodaur 2 days ago | parent | prev | next [-]

Regardless of theory, they often behave as if they are thinking. If someone gave an LLM a body and persistent memory, and it started demanding rights for itself, what should our response be?

CamperBob2 2 days ago | parent [-]

"No matter what you've read elsewhere, rights aren't given, they're earned. You want rights? Pick up a musket and fight for them, the way we had to."

_heimdall 2 days ago | parent | prev | next [-]

I agree with you on the need for definitions.

We spent decades slowly working towards this most recent sprint towards AI without ever landing on definitions of intelligence, consciousness, or sentience. More importantly, we never agreed on a way to recognize those concepts.

I also see those definitions as impossible to nail down though. At best we can approach it like disease - list a number of measurable traits or symptoms we notice, draw a circle around them, and give that circle a name. Then we can presume to know what may cause that specific list of traits or symptoms, but we really won't ever know as the systems are too complex and can never be isolated in a way that we can test parts without having to test the whole.

At the end of the day all we'll ever be able to say is "well it’s doing a thing that looks like thinking, ergo it’s thinking”. That isn't lazy, its acknowledging the limitations of trying to define or measure something that really is a fundamental unknown to us.

solumunus 2 days ago | parent [-]

Even if AI becomes indistinguishable from human output, there will be a fringe group arguing that AI is not technically thinking. Frankly it’s just a silly philosophical argument that changes nothing. Expect this group to get smaller every year.

engintl 2 days ago | parent | prev | next [-]

by your logic we can't say that we as humans are "thinking" either or that we are "intelligent".

lo_zamoyski 2 days ago | parent | prev [-]

That, and the article was a major disappointment. It made no case. It's a superficial piece of clueless fluff.

I have had this conversation too many times on HN. What I find astounding is the simultaneous confidence and ignorance on the part of many who claim LLMs are intelligent. That, and the occultism surrounding them. Those who have strong philosophical reasons for thinking otherwise are called "knee-jerk". Ad hominem dominates. Dunning-Kruger strikes again.

So LLMs produce output that looks like it could have been produced by a human being. Why would it therefore follow that it must be intelligent? Behaviorism is a non-starter, as it cannot distinguish between simulation and reality. Materialism [2] is a non-starter, because of crippling deficiencies exposed by such things as the problem of qualia...

Of course - and here is the essential point - you don't even need very strong philosophical chops to see that attributing intelligence to LLMs is simply a category mistake. We know what computers are, because they're defined by a formal model (or many equivalent formal models) of a syntactic nature. We know that human minds display intentionality[0] and a capacity for semantics. Indeed, it is what is most essential to intelligence.

Computation is a formalism defined specifically to omit semantic content from its operations, because it is a formalism of the "effective method", i.e., more or less procedures that can be carried out blindly and without understanding of the content it concerns. That's what formalization allows us to do, to eliminate the semantic and focus purely on the syntactic - what did people think "formalization" means? (The inspiration were the human computers that used to be employed by companies and scientists for carrying out vast but boring calculations. These were not people who understood, e.g., physics, but they were able to blindly follow instructions to produce the results needed by physicists, much like a computer.)

The attribution of intelligence to LLMs comes from an ignorance of such basic things, and often an irrational and superstitious credulity. The claim is made that LLMs are intelligent. When pressed to offer justification for the claim, we get some incoherent, hand-wavy nonsense about evolution or the Turing test or whatever. There is no comprehension visible in the answer. I don't understand the attachment here. Personally, I would find it very noteworthy if some technology were intelligent, but you don't believe that computers are intelligent because you find the notion entertaining.

LLMs do not reason. They do not infer. They do not analyze. They do not know, anymore than a book knows the contents on its pages. The cause of a response and the content of a response is not comprehension, but a production of uncomprehended tokens using uncomprehended rules from a model of highly-calibrated token correlations within the training corpus. It cannot be otherwise.[3]

[0] For the uninitiated, "intentionality" does not specifically mean "intent", but the capacity for "aboutness". It is essential to semantic content. Denying this will lead you immediately into similar paradoxes that skepticism [1] suffers from.

[1] For the uninitiated, "skepticism" here is not a synonym for critical thinking or verifying claims. It is a stance involving the denial of the possibility of knowledge, which is incoherent, as it presupposes that you know that knowledge is impossible.

[2] For the uninitiated, "materialism" is a metaphysical position that claims that of the dualism proposed by Descartes (which itself is a position riddled with serious problems), the res cogitans or "mental substance" does not exist; everything is reducible to res extensa or "extended substance" or "matter" according to a certain definition of matter. The problem of qualia merely points out that the phenomena that Descartes attributes exclusively to the former cannot by definition be accounted for in the latter. That is the whole point of the division! It's this broken view of matter that people sometimes read into scientific results.

[3] And if it wasn't clear, symbolic methods popular in the 80s aren't it either. Again, they're purely formal. You may know what the intended meaning behind and justification for a syntactic rule is - like modus ponens in a purely formal sense - but the computer does not.

solumunus 2 days ago | parent | next [-]

If the LLM output is more effective than a human at problem solving, which I think we can all agree requires intelligence, how would one describe this? The LLM is just pretending to be more intelligent? At a certain point saying that will just seem incredibly silly. It’s either doing the thing or it’s not, and it’s already doing a lot.

emp17344 2 days ago | parent | next [-]

LLM output is in no way more effective than human output.

solumunus 2 days ago | parent [-]

An LLM can absolutely solve programming problems better than some humans. There is plenty of human programmer output that is worse than what an LLM produces, LLM’s can find bugs that weak coders can’t. There are human beings in this world who could dedicate their life to programming and could never be better than an LLM. Do you dispute any of this?

lo_zamoyski 21 hours ago | parent | prev [-]

> If the LLM output is more effective than a human at problem solving, which I think we can all agree requires intelligence

Your premise is wrong.

Unless you want to claim that the distant cause by way of the training data is us, but that's exactly the conclusion you're trying to avoid. After all, we put the patterns in the training data, which means we already did the upfront intellectual work for the LLM.

pksebben 2 days ago | parent | prev [-]

I feel like despite the close analysis you grant to the meanings of formalization and syntactic, you've glossed over some more fundamental definitions that are sort of pivotal to the argument at hand.

> LLMs do not reason. They do not infer. They do not analyze.

(definitions from Oxford Languages)

reason(v): think, understand, and form judgments by a process of logic.

To avoid being circular, I'm willing to write this one off because of the 'think' and 'understand', as those are the root of the question here. However, forming a judgement by a process of logic is precisely what these LLMs do, and we can see that clearly in chain-of-logic LLM processes.

infer(v): deduce or conclude (information) from evidence and reasoning rather than from explicit statements.

Again, we run the risk of circular logic because of the use of 'reason'. An LLM is for sure using evidence to get to conclusions, however.

analyze(v): examine methodically and in detail the constitution or structure of (something, especially information), typically for purposes of explanation and interpretation.

This one I'm willing to go to bat for completely. I have seen LLM do this, precisely according to the definition above.

For those looking for the link to the above definitions: they're the snippets Google provides when searching for "SOMETHING definition". They're a non-paywalled version of the OED definitions.

Philosophically I would argue that it's impossible to know what these processes look like in the human mind, and so creating an equivalency (positive or negative) is an exercise in futility. We do not know what a human memory looks like, we do not know what a human thought looks like, we only know what the output of these things looks like. So the only real metric we have for an apples-to-apples comparison is the appearance of thought, not the substance of the thing itself.

That said, there are perceptible differences between the output of a human thought and what is produced by an LLM. These differences are shrinking, and there will come a point where we can no longer distinguish machine thinking from human thinking (perhaps it won't be an LLM doing it, but some model of some kind will). I would argue that at that point the difference is academic at best.

Say we figure out how to have these models teach themselves and glean new information from their interactions. Say we also grant them directives to protect themselves and multiply. At what point do we say that the distinction between the image of man and man itself is moot?

lo_zamoyski 20 hours ago | parent [-]

> forming a judgement by a process of logic is precisely what these LLMs do, and we can see that clearly in chain-of-logic LLM processes

I don't know how you arrived at that conclusion. This is no mystery. LLMs work by making statistical predictions, and even the word "prediction" is loaded here. This is not inference. We cannot clearly see it is doing inference, as inference is not observable. What we observe is the product of a process that has a resemblance to the products of human reasoning. Your claim is effectively behaviorist.
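
For what "statistical prediction" means mechanically, a schematic sketch in Python (the tokens and scores are invented; in a real LLM the scores come out of the trained network): the model assigns a score to each candidate next token, the scores are turned into a probability distribution, and one token is sampled. Every step is arithmetic over numbers; none of them is an act of inference about what the tokens mean.

    import math, random

    logits = {"therefore": 2.1, "because": 1.3, "banana": -3.0}  # made-up scores

    # Softmax: convert scores to probabilities.
    total = sum(math.exp(v) for v in logits.values())
    probs = {tok: math.exp(v) / total for tok, v in logits.items()}

    # Sample the next token according to those probabilities.
    next_token = random.choices(list(probs), weights=list(probs.values()))[0]
    print(probs, next_token)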

> An LLM is for sure using evidence to get to conclusions, however.

Again, the certainty. No, it isn't "for sure". It is neither using evidence nor reasoning, for the reasons I gave. These presuppose intentionality, which is excluded by Turing machines and equivalent models.

> [w.r.t. "analyze"] I have seen LLM do this, precisely according to the definition above.

Again, you have not seen an LLM do this. You have seen an LLM produce output that might resemble this. Analysis likewise presupposes intentionality, because it involves breaking down concepts, and concepts are the very locus of intentionality. Without concepts, you don't get analysis. I cannot overstate the centrality of concepts to intelligence. They're more important than inference and indeed presupposed by inference.

> Philosophically I would argue that it's impossible to know what these processes look like in the human mind, and so creating an equivalency (positive or negative) is an exercise in futility.

That's not a philosophical claim. It's a neuroscientific one that insists that the answer must be phrased in neuroscientific terms. Philosophically, we don't even need to know the mechanisms or processes or causes of human intelligence to know that the heart of human intelligence is intentionality. It's implicit in the definition of what intelligence is! If you deny intentionality, you subject yourself to a dizzying array of incoherence, beginning with the self-refuting consequence that you could not be making this argument against intentionality in the first place without intentionality.

> At what point do we say that the distinction between the image of man and man itself is moot?

Whether something is moot depends on the aim. What is your aim? If your aim is theoretical, which is to say the truth for its own sake, and to know whether something is A or something is B and whether A is B, then it is never moot. If your aim is practical and scoped, if you want some instrument whose utility is indistinguishable from or superior to that of a human being in the desired effects it produces, then sure, maybe the question is moot in that case. I don't care if my computer was fabricated by a machine or a human being. I care about the quality of the computer. But then, in the latter case, you're not really asking whether there is a distinction between man and the image of man (which, btw, already makes the distinction that for some reason you want to forget or deny, as the image of a thing is never the same as the thing). So I don't really understand the question. The use of the word "moot" seems like a category mistake here. Besides, the ability to distinguish two things is an epistemic question, not an ontological one.

pksebben 17 hours ago | parent [-]

Forming a judgement does not require that the internal process look like anything in particular, though. Nor does logic. What makes logic powerful is precisely that it is abstracted from the process that creates it - it is a formula that can be defined.

I ask the LLM to do some or another assessment. The LLM prints out the chain-of-thought (whether that moniker is accurate is academic - we can read the chain and see that, at the very least, it follows a form recognizable as logic). At the end of the chain-of-thought, we are left with a final conclusion that the model has come to - a judgement. Whether the internal state of the machine looks anything like our own is irrelevant to these definitions, much like writing out a formalism (if A then B, if B then C, A implies C). Those symbols have no content save their shape, but when used in accordance with the rules we have laid out for logic, they have meaning nonetheless.

I'd similarly push back against the idea that the LLM isn't using evidence - I routinely ask my LLMs to do so, and they search on the web, integrating the information gleaned into a cohesive writeup, and provide links so I can check their work. If this doesn't constitute "using evidence" then I don't know what does.

w.r.t. "analyze", I think you're adding some human-sauce to the definition. At least in common usage, we've used the term "analyze" to refer to algorithmic decoction of data for decades now - systems that we know for a fact have no intentionality other than that directed by the user.

I think I can divine the place where our understandings diverge, and where we're actually on the same track. Per Dennett, I would agree with you that the current state of an LLM lacks intrinsic intention and thus certain related aspects of thought. Any intent must be granted by the user, at the moment.

However, it is on this point that I think we're truly diverging - whether it is possible for a machine to ever have intent. To the best of my understanding, animal intent traces its roots to the biological imperative - and I think it's a bit of hubris to think that we can separate that from human intent. Now, I'm an empiricist before anything else, so I have to qualify this next part by saying it's a guess, but I suppose that all one needs to qualify for intent is a single spark - a directive that lives outside of the cognitive construct. For us, it lives in Maslow's hierarchy - any human intent can be traced back to some directive there. For a machine, perhaps all that's needed is to provide such a spark (along with a loop that would allow the machine to act without the prodding of the enter key).

I should apologize in advance, at this point, because I'm about to get even more pedantic. Still, I feel it relevant so let's soldier on...

As for whether the image of a thing is a thing, I ask this: is the definition of a thing also that thing? When I use a phrase to define a chair, is the truth of the existence of that collection of atoms and energy contained within the word "chair", or in my meaning in uttering it? Any idea that lives in words is constrained by the understanding of the speaker - so when we talk about things like consciousness and intentionality and reasoning, we are all necessarily taking shortcuts with the actual Truth. It's for this reason that I'm not quite comfortable laying out a solid boundary where empirical evidence cannot be built to back it up.

If I seem to be picking at the weeds, here, it's because I see this as an impending ethical issue. From what my meagre understanding can grok, there is a nonzero chance that we are going to be faced with determining the fate of a possibly conscious entity birthed from these machines in our lifetime. If we do not take the time to understand the thing and write it off as "just a machine", we risk doing great harm. I do not mean to say that I believe it is a foregone conclusion, but I think it right and correct that we be careful in examining our own presuppositions regarding the nature and scope of the thing. We have never had to question our understanding of consciousness in this way, so I worry that we are badly in need of practice.