_heimdall 3 days ago:
I'm spitballing here, so definitely take this with a grain of salt. If I only ever see legal moves, I may not think outside the box and come up with moves other than what I already saw. Humans run into this all the time: we see things done a certain way and effectively learn that that's just how it's done, so we don't innovate. Said differently, if the generative AI isn't actually being generative at all, meaning it's just predicting based on the training set, it could be producing only legal moves without ever learning or understanding the rules of the game.
adelineJoOs 3 days ago | parent:
I am not an ML person, and I know there is a mathematical explanation for what I am about to write, but here comes my informal reasoning: I fear this is not the case.

1) Either the LLM (or other forms of deep neural networks) can reproduce exactly what it saw, but nothing new (then it would only produce legal moves, if it was trained on only legal ones).

2) Or the LLM can produce moves it did not exactly see, by outputting the "most probable"-looking move in a situation it has never encountered before. In effect, this combines different situations and their outputs into a new output. As a result of this "mixing", it might output an illegal move (that is, a move that is illegal in the new situation), despite having been trained on only legal moves.

In fact, I am not even sure the deep neural networks we use in practice can replicate their training data exactly. It seems to me there is some kind of compression going on when knowledge is embedded into the network, and that compression comes with a loss.

I am deeply convinced that LLMs will never be an exact technology on their own (but LLMs combined with other technology, like proof assistants or compilers, might be).
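Point 2) can be shown with a deliberately tiny toy model (my own construction, not anything from the thread): a bigram "next-move predictor" trained only on legal games of a made-up game whose one rule is that no move may be repeated. Every local transition the model strings together was seen in training, yet the combined sequence can still break the global rule — exactly the "mixing" failure mode, and the final `is_legal` check plays the role of the exact external tool:

```python
from collections import defaultdict

# Toy rule (an assumption for illustration): a game is a sequence of
# moves, and each move may be played at most once per game.
def is_legal(game):
    return len(game) == len(set(game))

# Training data contains only legal games.
training_games = [["a", "b", "c"], ["b", "a", "c"]]
assert all(is_legal(g) for g in training_games)

# "Train" a bigram predictor: record which move may follow which.
bigrams = defaultdict(set)
for game in training_games:
    for prev, nxt in zip(game, game[1:]):
        bigrams[prev].add(nxt)

# A sequence the model could plausibly generate: every individual
# step (a->b, b->a) occurred in the legal training games...
candidate = ["a", "b", "a"]
each_step_seen = all(b in bigrams[a] for a, b in zip(candidate, candidate[1:]))

print(each_step_seen)       # True: locally, every step matches training data
print(is_legal(candidate))  # False: globally, the sequence breaks the rule
```

The model never saw an illegal game, yet recombining seen transitions produces one; only the exact rule-checker bolted on at the end catches it.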