▲ | lukeschlather 3 days ago |
> (a) the LLM does reason through the rules and understands what moves are legal or (b) was trained on a large set of legal moves and therefore only learned to make legal moves. How can you learn to make legal moves without understanding what moves are legal?
▲ | _heimdall 3 days ago | parent | next [-]
I'm spitballing here, so definitely take this with a grain of salt. If I only ever see legal moves, I may never think outside the box and come up with moves other than the ones I already saw. Humans run into this all the time: we see things done a certain way and effectively learn that that's just how it's done, so we don't innovate. Said differently, if the generative AI isn't actually being generative at all, meaning it's just predicting based on the training set, it could be producing only legal moves without ever learning or understanding the rules of the game.
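To make that concrete, here's a minimal sketch (my own toy example, not anything from the papers being discussed) of a pure "statistical parrot": it memorizes which moves followed each prefix in its training games and samples from those, never consulting a rulebook. Its outputs are legal only because the training data contained nothing else.

```python
import random

# Toy training corpus: short sequences of legal chess opening moves.
training_games = [
    ["e4", "e5", "Nf3"],
    ["e4", "e5", "Bc4"],
    ["d4", "d5", "c4"],
]

# Build a next-move table keyed on the prefix of moves seen so far.
next_moves = {}
for game in training_games:
    for i in range(len(game)):
        prefix = tuple(game[:i])
        next_moves.setdefault(prefix, []).append(game[i])

def predict(prefix):
    """Sample a continuation observed in training; no legality check anywhere."""
    candidates = next_moves.get(tuple(prefix))
    return random.choice(candidates) if candidates else None

print(predict(["e4", "e5"]))  # "Nf3" or "Bc4": both followed this prefix in training
print(predict(["h4"]))        # None: unseen prefix, the parrot has nothing to say
```

The point of the sketch is the failure mode: on any prefix outside its training data it simply has no move at all, whereas a system that had internalized the rules could still produce a legal one.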
▲ | ramraj07 3 days ago | parent | prev [-]
I think they’ll acknowledge these models are truly intelligent only when the LLMs also irrationally run circles around logic to insist that LLMs are statistical parrots.