og_kalu 3 days ago

If it displays the outward appearance of reasoning, then it is reasoning. We don't evaluate humans any differently: there's no magic intell-o-meter that can detect the amount of intelligence flowing through a brain.

Anything else is just an argument about semantics. The idea that there is "true" reasoning and "fake" reasoning, but that we can't tell the latter apart from the former, is ridiculous.

You can't eat your cake and have it too. Either "fake reasoning" is a thing and can be distinguished, or it can't be and it's just a made-up distinction.

suddenlybananas 3 days ago | parent

If I have a calculator with a look-up table of all additions of natural numbers under 100, the calculator can "appear" to be adding despite the fact that it is not.

sourcepluck 3 days ago | parent | next

Yes, indeed. Bullets know how to fly, and my kettle somehow knows that water boils at 373.15 K! There's been an explosion of intelligence since the LLMs came about :D

og_kalu 3 days ago | parent

Bullets don't have the outward appearance of flight. They follow projectile motion and look like it. Finding the distinction is trivial.

The look-up table is the same: it falls apart on numbers above 100. That's the distinction.

People need to start bringing up the supposed distinction that exists with LLMs, instead of nonsense examples that don't even pass the test outlined.

int_19h 3 days ago | parent | prev | next

This argument would hold up if LLMs were large enough to hold a look-up table of all possible valid inputs that they can correctly respond to. They're not.

og_kalu 3 days ago | parent | prev

Until you ask it to add numbers above 100 and it falls apart. That is the point here: you found a distinction. If you can't find one, then you're arguing semantics. People who say LLMs can't reason have yet to find a distinction that doesn't also disqualify a bunch of humans.