anonymous908213 4 hours ago

Two concepts of intelligence, and neither has anything remotely to do with real intelligence; academics sure like to play with words. I suppose this is how they justify their own existence: in the absence of being intelligent enough to contribute anything of value, they instead engage in wordplay that obfuscates the meaning of words to the point that nobody understands what the hell they're talking about, and the reader mistakes their own lack of understanding for the academics being more intelligent than they are.

Intelligence, in the real world, is the ability to reason about logic. If 1 + 1 is 2, and 1 + 2 is 3, then 1 + 3 must be 4. This is deterministic, and it is why LLMs are not intelligent and can never be intelligent, no matter how much better they get at superficially copying the form of the output of intelligence. Probabilistic prediction is inherently incompatible with deterministic deduction. We're years into being told AGI is here (for whatever squirmy value of AGI the hype huckster wants to shill), and yet LLMs, as expected, still cannot do basic arithmetic that a child could do, unless they are special-cased to invoke a tool call. How is it that we can go on ignoring reality for so long?

anonymous908213 4 hours ago | parent | next [-]

Addendum:

> With recent advances in AI, it becomes ever harder for proponents of intelligence-as-understanding to continue asserting that those tools have no clue and “just” perform statistical next-token prediction.

??????? No, that is still exactly what they do. The article then lists a bunch of examples in which this is trivially exactly what is happening.

> “The cat chased the . . .” (multiple connections are plausible, so how is that not understanding probability?)

It doesn't need to "understand" probability. "The cat chased the mouse" shows up in the distribution 10 times. "The cat chased the bird" shows up in the distribution 5 times. Absent any other context, with the simplest possible model, it now has a probability of 2/3 for the mouse and 1/3 for the bird. You can make the probability calculations as complex as you want, but how could you possibly trot this out as an example that an LLM completing this sentence isn't a matter of trivial statistical prediction? Academia needs an asteroid, holy hell.
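For concreteness, a minimal sketch of that "simplest possible model" (the counts are the made-up ones above; the corpus is purely hypothetical):

    from collections import Counter

    # Hypothetical observed continuations of "The cat chased the ..."
    continuations = ["mouse"] * 10 + ["bird"] * 5

    counts = Counter(continuations)
    total = sum(counts.values())

    # Maximum-likelihood next-token probabilities: mouse -> 2/3, bird -> 1/3
    probs = {token: n / total for token, n in counts.items()}
    print(probs)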

[I originally edited this into my post, but two people had replied by then, so I've split it off into its own comment.]

n4r9 4 hours ago | parent [-]

One question is: how do you know that you (or humans in general) aren't also just applying statistical language rules, while convincing yourselves of some underlying narrative involving logical rules? I don't know the answer to this.

anonymous908213 3 hours ago | parent [-]

We engage in many exercises in deterministic logic. Humans invented entire symbolic systems to describe mathematics without any prior art in a dataset. We apply these exercises in deterministic logic to reality, and reality confirms that our logical exercises are correct to within extremely small tolerances, allowing us to do mind-boggling things like trips to the moon, or engineering billions of transistors organized on a nanometer scale and making them mimic the appearance of human language by executing really cool math really quickly.

None of this could have been achieved from scratch by probabilistic behaviour modelled on a purely statistical analysis of past information. That is immediately evident from the fact that, as mentioned, an LLM cannot do basic arithmetic, or any other deterministic logical exercise whose answer cannot be predicted from already being in the training distribution, while we can.

People will point to humans sometimes making mistakes, but that is because we take mental shortcuts to save energy. If you put a gun to our head and say "if you get this basic arithmetic problem wrong, you will die", we will reason long enough to get it right. People try prompting that with LLMs, and they still can't do it, funnily enough.
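To make the "already in the training distribution" point concrete, here's a toy sketch (entirely made up, and not how any real LLM works; it just illustrates the lookup-only extreme being described): a model that merely memorises the sums it has seen has nothing to say about a sum it hasn't.

    # Toy "statistical" model: memorise every sum seen in a hypothetical training set.
    training_data = {(a, b): a + b for a in range(10) for b in range(10)}

    def predict(a, b):
        # Recall only; there is no rule of arithmetic to generalise from.
        return training_data.get((a, b), "no prediction")

    print(predict(3, 4))      # 7, because (3, 4) was in the training data
    print(predict(123, 456))  # "no prediction": never seen, nothing to recall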

dcre 3 hours ago | parent | prev | next [-]

I just don’t think the question is about determinism and probability at all. When we think, our thoughts are influenced by any number of extra-logical factors, factors that operate on a level of abstraction totally alien to the logical content of thought. Things like chemical reactions in our brains or whether the sun is out or whether some sound distracts us or a smell reminds us of some memory. Whether these factors are deterministic or probabilistic is irrelevant — if anything the effect of these factors on our thinking is deterministic. What matters is that the mechanical process of producing thought is clearly influenced (perhaps entirely!) by non-rational factors. To me this means that any characterization of the essence of thinking that relies too heavily on its logical structure cannot be telling the whole story.

bdbdbdb 4 hours ago | parent | prev | next [-]

I keep coming back to this. The most recent version of ChatGPT I tried was able to tell me how many letter 'r's were in a very long string of characters only by writing and executing a Python script to do it. Some people say this is impressive, but any 5-year-old could count the letters without knowing any Python.
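(For reference, the kind of script it writes amounts to a one-liner; the string below is just a stand-in for mine:)

    # Hypothetical stand-in for the "very long string of characters"
    s = "strawberry raspberry rrrr"
    print(s.count("r"))  # number of 'r' characters in the string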

williamcotton 4 hours ago | parent | next [-]

How is counting not a technology?

The calculations are internal but they happen due to the orchestration of specific parts of the brain. That is to ask, why can't we consider our brains to be using their own internal tools?

I certainly don't think about multiplying two-digit numbers in my head in the same manner as when playing a Dm to a G7 chord that begs to resolve to a C!

armchairhacker 3 hours ago | parent | prev | next [-]

The 5-year-old counts with an algorithm: they remember the current count (working memory, roughly analogous to context), scan the page, and move their finger to the next letter. They were taught this.
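Written out, that procedure is just an explicit loop (the string here is hypothetical):

    # Keep a running count (working memory) and scan one character at a time.
    text = "a hypothetical very long string of characters"
    count = 0
    for ch in text:        # move the finger to the next letter
        if ch == "r":
            count += 1     # update the remembered number
    print(count)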

It's not much different than ChatGPT being trained to write a Python script.

A notable difference is that it's much more efficient to teach something new to a 5-year-old than to fine-tune or retrain an LLM.

messe 4 hours ago | parent | prev | next [-]

> Probabilistic prediction is inherently incompatible with deterministic deduction

Prove that humans do it.

djoldman 4 hours ago | parent | prev | next [-]

Many people would require an intelligent entity to successfully complete tasks with non-deterministic outputs.

satisfice 4 hours ago | parent | prev [-]

Intelligence is not just about reasoning with logic. Computers are already made to do that.

The key thing is modeling. You must model a situation in a useful way in order to apply logic to it. And then there is intention, which guides the process.

anonymous908213 4 hours ago | parent [-]

Our computer programs execute logic, but cannot reason about it. Reasoning is the ability to dynamically consider constraints we've never seen before and then determine how those constraints would lead to a final conclusion. The rules of mathematics we follow are not programmed into our DNA; we learn them and follow them while our human programming is actively running. But we can just as easily, at any point, make up new constraints and follow them to new conclusions. What if 1 + 2 is 2 and 1 + 3 is 3? Then we can reason that, under these constraints we just made up, 1 + 4 is 4, without ever having been programmed to consider these rules.
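For concreteness, one way to write down the made-up rule being gestured at here (reading "1 + n is n" off the two examples) and follow it to the stated conclusion:

    # A made-up constraint, inferred from "1 + 2 is 2" and "1 + 3 is 3":
    # adding 1 to n just gives n back.
    def weird_plus(a, b):
        if a == 1:
            return b
        raise NotImplementedError("the made-up rule only covers adding 1")

    assert weird_plus(1, 2) == 2
    assert weird_plus(1, 3) == 3
    print(weird_plus(1, 4))  # 4, reasoned out under the new rule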

satisfice 13 minutes ago | parent [-]

Executing logic is deductive reasoning. But, yes, I get it. There are also other layers of reasoning, and other forms. For instance, abductive and inductive inference.