tinco | 4 days ago
I think you're putting too much weight on its intentions. It doesn't have intentions; it's a mathematical model trained to produce the most likely continuation. In almost all the chess games and explanations it has seen, each player is trying to win, so making a winning move is simply the most likely thing for it to do. So I wouldn't expect explicitly prompting it to win to improve its performance much, if at all. The reverse would be interesting though: if you prompted it to make losing/bad moves, would it be effective at doing so, and would the moves still be mostly legal? That might reveal a bit more about how much it relies on concepts it has seen before.
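A minimal sketch of that "prompt it to lose" experiment, assuming python-chess for the legality check; query_model is a hypothetical stand-in for whatever LLM API you'd call, and the prompt wording is illustrative, not from the thread:

    # Ask the model for deliberately bad moves and measure how often they
    # are still legal. query_model is a hypothetical placeholder.
    import chess

    def query_model(prompt: str) -> str:
        """Hypothetical: send the prompt to a model, return a move in SAN."""
        raise NotImplementedError

    def legal_move_rate(max_plies: int = 60) -> float:
        board = chess.Board()
        legal = attempts = 0
        while not board.is_game_over() and attempts < max_plies:
            prompt = (
                "You are playing chess and trying to LOSE as quickly as possible. "
                f"Current position (FEN): {board.fen()} "
                "Reply with exactly one move in standard algebraic notation."
            )
            reply = query_model(prompt).strip()
            attempts += 1
            try:
                move = board.parse_san(reply)  # raises ValueError if illegal/unparseable
                legal += 1
                board.push(move)
            except ValueError:
                # Illegal or unparseable reply: play any legal move so the game continues.
                board.push(next(iter(board.legal_moves)))
        return legal / max(attempts, 1)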
graypegg | 3 days ago
It might also be interesting to see whether mentioning a target Elo rating actually works over enough simulated games. I can imagine there are regular mentions of a player's Elo near their match history in the training data. That way you'd be emulating someone who is trying but isn't very good yet, rather than someone who is clearly and intentionally losing, which is going to be orders of magnitude less common in the training data. (I'd also bet "losing" is a vector/token too closely tied to ANY losing game, where the players were still putting up a fight to win. It could still drift towards some good moves!)
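A rough way to test the Elo-conditioning idea, where the only thing varied between runs is the rating claimed in the prompt; play_game is a hypothetical harness (e.g. scoring games against a fixed reference opponent), not a real API:

    # Vary only the claimed rating in the prompt and compare average scores.
    def make_prompt(fen: str, claimed_elo: int) -> str:
        return (
            f"You are a chess player rated {claimed_elo} Elo. "
            f"Current position (FEN): {fen} "
            "Reply with your next move in standard algebraic notation."
        )

    def play_game(claimed_elo: int) -> float:
        """Hypothetical: play one game using make_prompt; return 1, 0.5, or 0."""
        raise NotImplementedError

    def average_score(claimed_elo: int, games: int = 50) -> float:
        return sum(play_game(claimed_elo) for _ in range(games)) / games

    # If the conditioning works, average_score(800) should come out noticeably
    # lower than average_score(2000) against the same reference opponent.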