stuaxo 4 days ago

I hate the use of words like "understand" in these conversations.

The system understands nothing; it's anthropomorphising to say it does.

trashtester 4 days ago | parent | next [-]

I have the same conclusion, but for the opposite reason.

It seems like many people use the word "understand" to mean not only that someone believes a given move is good, but also that this belief comes from a rational evaluation.

Some attribute this to a non-material soul/mind, some to quantum mechanics or something else that seems magical, while others never realized the problem with such a belief in the first place.

I would claim that when someone can instantly recognize good moves in a given situation, it doesn't come from rationality at all, but from some mix of memory and an intuition that has been built by playing the game many times, with only tiny elements of actual rational thought sprinkled in.

This even holds true when these people start to calculate. It is primarily their intuition that prevents them from spending time on all sorts of unlikely moves.

And this intuition, I think, represents most of their real "understanding" of the game. This is quite different from understanding something like a mathematical proof, which is almost exclusively deductive logic.

And since "understand" so often is associated with rational inductive logic, I think the proper term would be to have "good intuition" when playing the game.

And this "good intuition" seems to me precisely the kind of thing that is trained within most neural nets, even LLM's. (Q*, AlphaZero, etc also add the ability to "calculate", meaning traverse the search space efficiently).

If we wanted to measure how good this intuition is compared to human chess intuition, we could limit an engine like AlphaZero to evaluating only the same number of moves per second that a good human can, which might be around 10 or so.

Maybe with this limitation the engine wouldn't currently be able to beat the best humans, but even if it only reaches a rating of 2000-2500 this way, I would say it has a pretty good intuitive understanding.
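As a rough sketch of how that node budget could be imposed in practice, here is what it might look like using the python-chess library to drive an AlphaZero-style engine such as Lc0 (the binary path, the self-play loop, and the budget of 10 nodes per move are all assumptions on my part; an actual rating estimate would of course require games against rated opponents):

    # Sketch only: cap an AlphaZero-style engine (e.g. Lc0) at a tiny node budget
    # per move, roughly matching the handful of candidate moves a human weighs.
    import chess
    import chess.engine

    # Hypothetical binary path; point this at wherever lc0 is installed.
    engine = chess.engine.SimpleEngine.popen_uci("/usr/local/bin/lc0")

    board = chess.Board()
    while not board.is_game_over():
        # Limit(nodes=10): the engine may evaluate only about 10 positions for this move.
        result = engine.play(board, chess.engine.Limit(nodes=10))
        board.push(result.move)

    print(board.result())
    engine.quit()

With the search squeezed down that far, move quality reflects almost entirely the network's "intuition", since there is hardly any calculation left to correct it.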

Sharlin 4 days ago | parent | prev | next [-]

Trying to appropriate perfectly generalizable terms as "something that only humans do" brings zero value to a conversation. It's essentially a "god of the gaps" argument, and we don't exactly have a great track record of correctly identifying things that are uniquely human.

fao_ 4 days ago | parent [-]

There is quite literally a whole wealth of papers proving that LLMs do not understand and cannot perform basic kinds of reasoning that even a dog can manage. But, ok.

wizzwizz4 3 days ago | parent | next [-]

There's a whole wealth of papers proving that LLMs do not understand the concepts they write about. That doesn't mean they don't understand grammar – which (as I've claimed since the GPT-2 days) we should, theoretically, expect them to "understand". And what is chess, but a particularly sophisticated grammar?

TeMPOraL 3 days ago | parent | prev [-]

There is quite literally a whole wealth of papers proving the opposite, too, so ¯\_(ツ)_/¯.

int_19h 3 days ago | parent | prev [-]

The whole point of this exercise is to understand what "understand" even means. Because we really don't have a good definition for this, and until we do, statements like "the system understands nothing" are vacuous.