mjburgess 3 days ago

"Intelligence" is a metaphor used to describe LLMs (, AI) used by those who have never studied intelligence.

If you had studied intelligence as a science of systems that are intelligent (i.e., animals, people, etc.), then this comparison would seem absurd to you: mendacious and designed to confound.

The desperation to find some scenario in which, at the most superficial level, an intelligent agent "benchmarks like an LLM" is a pathology of thinking designed to lure the gullible into credulity.

If an LLM is said to benchmark on arithmetic like a person doing math whilst being tortured, then the LLM cannot do math -- just as a person being tortured cannot. I cannot begin to think what this is supposed to show.

LLMs, and all statistical learners based on interpolating historical data, are dramatically sensitive to permutations of their inputs, to the point that their performance collapses. A small permutation of the input is, if we must analogise, "like torturing a person to the point their mind ceases to function". These learners do not have representations of the underlying problem domain that fit the "natural, composable, general" structures of that domain; they are just fragments of text data put in a blender. You'll get performance only while that blender isn't being nudged.
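To make the brittleness claim concrete, here is a toy sketch (a hypothetical setup, not a claim about any particular LLM): a learner that merely memorises its training strings is perfect on inputs it has seen, but collapses toward chance on a label-preserving permutation of those same inputs, because it holds no representation of the underlying rule (here, the parity of a sum, which is invariant to token order).

```python
import random

random.seed(0)

def true_label(seq):
    # Ground-truth rule: parity of the sum (invariant to reordering tokens)
    return sum(seq) % 2

# "Interpolating" learner: memorises exact training inputs and their labels
train = [tuple(random.randint(0, 9) for _ in range(5)) for _ in range(2000)]
memory = {seq: true_label(seq) for seq in train}

def memorizer(seq):
    # Returns the stored answer if this exact input was seen; guesses otherwise
    return memory.get(seq, random.randint(0, 1))

def accuracy(inputs):
    return sum(memorizer(s) == true_label(s) for s in inputs) / len(inputs)

orig = train[:500]
# Rotate each sequence by one token: the true label is unchanged,
# but the surface form no longer matches what was memorised
perturbed = [tuple(s[1:] + s[:1]) for s in orig]

print(accuracy(orig))       # 1.0 — every input was memorised
print(accuracy(perturbed))  # near chance: the "blender" was nudged
```

A model that actually computed the sum would be unaffected by the rotation; the memoriser fails precisely because it interpolates over surface forms rather than representing the domain's structure.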

The reason one must harm a person to the point of profound disability, where they cannot think, to get this kind of performance is that at that point a person cannot be said to be using their mind at all.

This is why the analogy holds only in a very superficial way: LLMs do not analogise to functioning minds; they are not minds at all.

squidbeak 2 days ago | parent [-]

You seem to be replying to a completely different post. You'll see I didn't once use the term 'intelligence', so the reprimand you led with about the use of that term is pretty odd.

The ramble that follows has its curiosities, not least your compulsion to demean or insult your 'gullible', 'credulous' opponents, but is otherwise far from any point. The contention of yours I was replying to was your curiously absolute statement that human performance doesn't degrade with the introduction of irrelevant information. I gave you instances any of us can relate to where it definitely does degrade. Rather than dispute my point, you've allowed some kind of 'extra information' to bounce you around irrelevancies from one tangent to the next: through torture, blenders, animals as systems, and so on. What you've actually done, quite beautifully, is restate my point for me.

n4r9 2 days ago | parent | next [-]

I may not agree with you but I appreciate your efforts to call out demeaning and absolutist language on HN. It really drags the discussion down.

mjburgess 20 hours ago | parent | prev [-]

So you strawmanned my claim about degradation of performance into one in which "substantial", "irrelevant", and "almost all cases" have no flexibility to circumscribe scenarios, so that I must be making a universal claim... and then you take issue with my reply?

Why would you think I'd deny that you can find scenarios in which performance substantially degrades? Did I not countenance torture in my reply?

My reply is against your presumption that an appropriate response to the spirit and plain meaning of my argument is to "go and find another scenario". It is this presumption that, when addressed, short-circuits the scenario-finding dialogue: in my reply I address the whole families of scenarios you are appealing to, where we fail to function well, and show why their existence remains irrelevant to our analysis of LLMs.