famouswaffles 4 hours ago

>Intelligence is the ability to reason about logic. If 1 + 1 is 2, and 1 + 2 is 3, then 1 + 3 must be 4.

Human intelligence is clearly not logic-based, so I'm not sure why you'd use such a definition.

>and yet LLMs, as expected, still cannot do basic arithmetic that a child could do without being special-cased to invoke a tool call.

One of the most irritating things about these discussions is proclamations that make it pretty clear you haven't used these tools in a while, or ever. Really, when was the last time you had an LLM try long multi-digit arithmetic on random numbers? Because your comment is just wrong.

>What if 1 + 2 is 2 and 1 + 3 is 3? Then we can reason that under these constraints we just made up, 1 + 4 is 4, without ever having been programmed to consider these rules.

Good thing LLMs can handle this just fine I guess.
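The made-up constraints in that quote amount to redefining addition so that 1 + n = n. Spelled out in code (my own hypothetical encoding, just to make the rule explicit):

```python
# The quoted hypothetical: "1 + 2 is 2 and 1 + 3 is 3" redefines "+"
# so that 1 + n = n; under that invented rule, 1 + 4 = 4 follows.
def made_up_add(a, b):
    if a == 1:
        return b      # the invented constraint: 1 + n = n
    return a + b      # everything else keeps ordinary addition

assert made_up_add(1, 2) == 2   # given
assert made_up_add(1, 3) == 3   # given
print(made_up_add(1, 4))        # -> 4 under the invented constraints
```

The point being: the pattern is trivially inferable from the two given examples, which is exactly why models pick it up from context.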

Your entire comment perfectly encapsulates why symbolic AI failed to go anywhere past its initial years. There's a class of people who really think they know how intelligence works, but build it that way and it fails completely.

anonymous908213 4 hours ago

> One of the most irritating things about these discussions is proclamations that make it pretty clear you haven't used these tools in a while, or ever. Really, when was the last time you had an LLM try long multi-digit arithmetic on random numbers? Because your comment is just wrong.

They still make these errors on anything that is out of distribution. There is literally a post in this thread linking to a chat where Sonnet failed a basic arithmetic puzzle: https://news.ycombinator.com/item?id=47051286

> Good thing LLMs can handle this just fine I guess.

LLMs can match an example at that trivial level because the answer can be predicted from context. However, if you construct a more complex example with several rules, especially rules that contradict each other and come with specified logic for resolving the conflicts, they fail badly. They can't even play chess or poker without breaking the rules, despite those games being extremely well represented in the training data, never mind a made-up set of logical rules.

famouswaffles 4 hours ago

>They still make these errors on anything that is out of distribution. There is literally a post in this thread linking to a chat where Sonnet failed a basic arithmetic puzzle: https://news.ycombinator.com/item?id=47051286

I thought we were talking about actual arithmetic, not silly puzzles, and there are many human adults who would fail this, never mind children.

>LLMs can match an example at that trivial level because the answer can be predicted from context. However, if you construct a more complex example with several rules, especially rules that contradict each other and come with specified logic for resolving the conflicts, they fail badly.

Even if that were true (have you actually tried?), you do realize many humans would also fail once you did all that, right?

>They can't even play chess or poker without breaking the rules, despite those games being extremely well represented in the training data, never mind a made-up set of logical rules.

LLMs can play chess just fine (99.8% legal-move rate, ~1800 Elo):

https://arxiv.org/abs/2403.15498

https://arxiv.org/abs/2501.17186

https://github.com/adamkarvonen/chess_gpt_eval
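For anyone who wants to sanity-check a legal-move-rate number like the one above, the check itself is only a few lines. A minimal sketch using the python-chess library (the helper name and sample moves are mine, not taken from the linked evals):

```python
# Minimal sketch of measuring a "legal move rate": replay a model's moves
# (hardcoded here in place of real model output) with python-chess and
# count how many are legal before the first illegal one.
import chess

def legal_move_rate(san_moves):
    """Return (legal_count, total) for SAN moves played from the start position."""
    board = chess.Board()
    legal = 0
    for san in san_moves:
        try:
            board.push_san(san)  # raises a ValueError subclass on illegal SAN
            legal += 1
        except ValueError:
            break  # evals typically stop the game at the first illegal move
    return legal, len(san_moves)

# Three legal opening moves, then an illegal king move:
print(legal_move_rate(["e4", "e5", "Nf3", "Ke2"]))  # -> (3, 4)
```

The linked chess_gpt_eval repo does essentially this at scale, aggregating over thousands of games.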

runarberg 3 hours ago

I still have not been convinced that LLMs are anything more than super fancy (and expensive) curve-fitting algorithms.

I don't like to throw the word intelligence around, but when we talk about intelligence we are usually talking about human behavior. And there is nothing human about being extremely good at curve fitting in a multi-parameter space.