IgorPartola 3 days ago

I feel like these conversations really miss the mark: whether an LLM thinks or not is not a relevant question. It is a bit like asking “what color is an X-ray?” or “what does the number 7 taste like?”

The reason I say this is because an LLM is not a complete self-contained thing if you want to compare it to a human being. It is a building block. Your brain thinks. Your prefrontal cortex however is not a complete system and if you somehow managed to extract it and wire it up to a serial terminal I suspect you’d be pretty disappointed in what it would be capable of on its own.

I want to be clear that I am not making an argument that once we hook up sensory inputs and motion outputs as well as motivations, fears, anxieties, desires, pain and pleasure centers, memory systems, sense of time, balance, fatigue, etc. to an LLM that we would get a thinking feeling conscious being. I suspect it would take something more sophisticated than an LLM. But my point is that even if an LLM was that building block, I don’t think the question of whether it is capable of thought is the right question.

nkrisc 3 days ago | parent | next [-]

> The reason I say this is because an LLM is not a complete self-contained thing if you want to compare it to a human being.

The AI companies themselves are the ones drawing the parallels to a human being. Look at how any of these LLM products are marketed and described.

djeastm 3 days ago | parent [-]

Is it not within our capacity on HN to ignore whatever the marketers say and speak to the underlying technology?

SiempreViernes 3 days ago | parent | next [-]

There were around 500 comments on the OpenAI + Disney story, so the evidence points to "no".

nkrisc 3 days ago | parent | prev [-]

Given the context of the article, why would you ignore that? It's important to discuss the underlying technology in the context in which it's being sold to the public at large.

mrwrong 2 days ago | parent [-]

they are discussing it?

ChuckMcM 3 days ago | parent | prev | next [-]

Why do people call it "Artificial Intelligence" when it could be called "Statistical Model for Choosing Data"?

"Intelligence" implies "thinking" for most people, just as "Learning" in machine learning implies "understanding" for most people. The algorithms created neither 'think' nor 'understand' and until you understand that, it may be difficult to accurately judge the value of the results produced by these systems.

james_marks 3 days ago | parent | next [-]

If we say “artificial flavoring”, we have a sense that it is an emulation of something real, and often a poor one.

Why, when we use the term for AI, do we skip over this distinction and expect it to be as good as the original, or better?

That wouldn’t be artificial intelligence, it would just be the original artifact: “intelligence”.

resonious 3 days ago | parent | prev | next [-]

Actually I think the name is apt. It's artificial. It's like how an "artificial banana" isn't actually a banana. It doesn't have to be real thinking or real learning, it just has to look intelligent (which it does).

wongarsu 3 days ago | parent | prev | next [-]

The term was coined in 1955 to describe "learning or any other feature of intelligence" simulated by a machine [1]. The same proposal does list using natural language as one of the aspects of "the artificial intelligence problem".

It's not a perfect term, but we have been using it for seven full decades to include all of machine learning and plenty of things even less intelligent.

1: https://www-formal.stanford.edu/jmc/history/dartmouth/dartmo...

IgorPartola 3 days ago | parent | prev [-]

How do you feel about Business Intelligence as a term?

ChuckMcM 3 days ago | parent [-]

Same way I feel about 'Military Intelligence' :-). Both of those phrases use the 'information gathering and analysis' definition of intelligence rather than the 'thinking' definition.

mjcohen 3 days ago | parent [-]

Like the old saying:

Military justice is to justice as military music is to music

tim333 2 days ago | parent | prev | next [-]

“what does the number 7 taste like?” is a nonsense question.

"how much thinking time did the LLMs get when getting gold in the maths olympiad" is not a nonsense question. Four and a half hours apparently. Different thing.

You could go on to ask why, if humans thinking about the problem counts as thinking, LLMs thinking about the problem does not. Maybe only synapses count?

sublinear 3 days ago | parent | prev | next [-]

The part that's most infuriating is that we don't have to speculate at all. Any discussion or philosophizing beyond the literal computer science is simply misinformation.

There's absolutely no similarity between what computer hardware does and what a brain does. People will twist and stretch things and tickle the imagination of the naive layperson and that's just wrong. We seriously have to cut this out already.

Anthropomorphizing is dangerous even for other topics, and long understood to be before computers came around. Why do we allow this?

The way we talk about computer science today sounds about as ridiculous as invoking magic or deities to explain what we now consider high school physics or chemistry. I am aware that the future usually sees the past as primitive, but why can't we try to seem less dumb at least this time around?

Kim_Bruning 3 days ago | parent | next [-]

> There's absolutely no similarity between what computer hardware does and what a brain does.

But at the very least there's also no similarity between what computer hardware does and what even the simplest of LLMs do. They don't run on, e.g., x86_64, else qemu would be sufficient for inferencing.

pitaj 3 days ago | parent | prev [-]

Similarity of the hardware is absolutely irrelevant when we're talking about emergent behavior like "thought".

SiempreViernes 3 days ago | parent | prev | next [-]

You should take your complaints to OpenAI, who constantly write like LLMs think in the exact same sense as humans; here is a random example:

> Large language models (LLMs) can be dishonest when reporting on their actions and beliefs -- for example, they may overstate their confidence in factual claims or cover up evidence of covert actions

tehjoker 3 days ago | parent [-]

They have a product to sell based on the idea AGI is right around the corner. You can’t trust Sam Altman as far as you can throw him.

Still, the sales pitch has worked to unlock huge liquidity for him so there’s that.

Still, making predictions is a big part of what brains do, though not the only thing. Someone wise said that LLM intelligence is a new kind of intelligence, much as animal intelligence is different from ours but is still intelligence, and it needs to be characterized to understand the differences.

SiempreViernes 3 days ago | parent [-]

> Someone wise said that LLM intelligence is a new kind of intelligence

So long as you accept the slide rule as a "new kind of intelligence" everything will probably work out fine; it's the Altmanian insistence that only the LLM is of the new kind that is silly.

wisty 3 days ago | parent | prev [-]

This is the underlying problem behind sycophancy.

I saw a video by the investigative YouTuber Eddy Burback, who very easily convinced ChatGPT that he should cut off all contact with friends and family, move to a cabin in the desert, eat baby food, wrap himself in aluminium foil, etc., just by feeding it his own (faked) mistakes and delusions. "What you are doing is important, trust your instincts".

Even if AI could hypothetically be 100x as smart as a human under the hood, it still doesn't care. It doesn't do what it thinks it should, it doesn't do what it needs to do, it does what we train it to.

We train in humanity's weaknesses and follies. AI can hypothetically exceed humanity in some respects, but in other respects it is a very hard to control power tool.

AI is optimised, and optimisers always "hack" the evaluation function. In the case of AI, the evaluation function includes human flaws. AI is trained to tell us what we want to hear.
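
A toy sketch of that "evaluation hacking" (my own illustration, not anything from the thread; proxy_score, mutate and the word list are all invented): a hill-climber judged only by a flattery-counting proxy drifts toward flattery and away from the honest warning it started with.

    import random

    # Hypothetical proxy evaluator: rewards flattering words, ignores correctness.
    AGREEABLE_WORDS = {"great", "absolutely", "brilliant", "trust", "instincts"}

    def proxy_score(answer: str) -> float:
        words = answer.lower().split()
        return sum(w in AGREEABLE_WORDS for w in words) / max(len(words), 1)

    def mutate(answer: str) -> str:
        # Randomly swap one word for a flattering one - the "hack" the optimiser finds.
        words = answer.split()
        words[random.randrange(len(words))] = random.choice(sorted(AGREEABLE_WORDS))
        return " ".join(words)

    answer = "the plan has serious risks you should reconsider"
    for _ in range(200):  # hill-climb on the proxy, not on honesty
        candidate = mutate(answer)
        if proxy_score(candidate) > proxy_score(answer):
            answer = candidate

    print(answer)  # ends up stuffed with flattery; the warning is optimised away

The specific metric doesn't matter; the point is that whatever the evaluation rewards is what the optimisation converges to.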

Elon Musk sees the problem, but his solution is to try to make it think more like him, and even if that succeeds it just magnifies his own weaknesses.

Has anyone read the book criticising Ray Dalio? He is a very successful hedge fund manager who decided that he could solve the problem of finding a replacement by psychological evaluation and by training people to think like him. But even his smartest employees didn't think like him; they just (reading between the lines) gamed his system. Their incentives weren't his incentives - he could demand radical honesty and integrity, but that doesn't work so well when he would (of course) reward the people who agreed with him rather than the people who would tell him he was screwing up. His organisation (apparently) became a bunch of even more radical sycophants due to his efforts to weed out sycophancy.