trick-or-treat 5 hours ago

> all LLM output is based on likelihood of one word coming after the next word based on the prompt.

Right, but it has to reason about what that next word should be. It has to model the problem and then consider ways to approach it.

razorbeamz 5 hours ago | parent [-]

No, it does not reason at all. LLM "reasoning" is just an illusion.

When an LLM is "reasoning" it's just feeding its own output back into itself and giving it another go.
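
Roughly, the whole "reasoning" loop is nothing more than this (a minimal sketch; `generate()` is a hypothetical stand-in for whatever completion call you like, not a real API):

```python
# Minimal sketch of "feeding its own output back into itself".
# generate() is a hypothetical placeholder for an LLM completion call.

def generate(prompt: str) -> str:
    """Return the model's next chunk of text for the given prompt."""
    raise NotImplementedError  # stand-in only

def reason(question: str, passes: int = 3) -> str:
    transcript = question
    for _ in range(passes):
        # Each "reasoning" step is just another completion,
        # conditioned on everything produced so far.
        transcript += "\n" + generate(transcript)
    return transcript
```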

fenomas 4 hours ago | parent | next [-]

This is like saying chess engines don't actually "play" chess, even though they trounce grandmasters. It's a meaningless distinction about words (think, reason, ...) that have no firm definitions.

trick-or-treat 4 hours ago | parent | next [-]

This exactly. The proof is in the pudding. If AI pudding is as good as (or better than) human pudding and you continue to complain about it anyway, you're just being biased and unreasonable.

And by the way, I don't think it's surprising that so many people are being unreasonable on this issue; there is a lot at stake, and its implications are transformative.

razorbeamz 4 hours ago | parent | prev [-]

Chess engines are not a comparable thing. Chess is a solved game. There is always a mathematically perfect move.

trick-or-treat 3 hours ago | parent | next [-]

> Chess is a solved game. There is always a mathematically perfect move.

This is a good example of being confidently misinformed.

The best move is always the result of calculation, and the calculation can always go deeper or run on a stronger engine.
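
To make that concrete, here's a toy depth-limited search (Nim instead of chess, purely so the example is self-contained; the depth parameter is the point):

```python
# Toy negamax over Nim: the answer is only as good as the search depth.
# Nim stands in for chess purely to keep this self-contained.

def moves(pile: int):
    return [m for m in (1, 2, 3) if m <= pile]

def negamax(pile: int, depth: int) -> int:
    if pile == 0:
        return -1  # player to move has no moves left and loses
    if depth == 0:
        return 0   # horizon reached: the engine can't see the outcome yet
    return max(-negamax(pile - m, depth - 1) for m in moves(pile))

print(negamax(7, 2))  # 0 -> too shallow to tell
print(negamax(7, 7))  # 1 -> deep enough to find the forced win
```

Stockfish on your phone versus Stockfish on a cluster is the same idea: the stronger setup just pushes that horizon further out.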

Scarblac 3 hours ago | parent | prev | next [-]

We know that chess can be solved, in theory. It absolutely isn't, and probably never will be in practice. The necessary time and storage space don't exist.
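
Back-of-envelope, using the usual rough figures of about 35 legal moves per position and roughly 80-ply games:

```python
import math

# Rough game-tree estimate: ~35 legal moves per position, ~80 plies per game.
branching, plies = 35, 80
print(f"~10^{plies * math.log10(branching):.0f} nodes in the game tree")  # ~10^124

# For comparison, the observable universe is usually estimated
# to contain on the order of 10^80 atoms.
```

Even the far smaller count of distinct legal positions (estimated on the order of 10^44) is hopeless to store.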

sincerely 3 hours ago | parent | prev [-]

Chess is absolutely not a solved game, outside of very limited situations like endgames. Just because a best move exists does not mean we (or even an engine) know what it is.

Scarblac 3 hours ago | parent | prev [-]

Is that so different from brains?

Even if it is, this sounds like "this submarine doesn't actually swim" reasoning.