throw310822 | 3 days ago
> For one, gpt-3.5-turbo-instruct rarely suggests illegal moves, even in the late game. This requires “understanding” chess. If this doesn't convince you, I encourage you to write a program that can take strings like 1. e4 d5 2. exd5 Qxd5 3. Nc3 and then say if the last move was legal.

This alone should put to rest all the arguments that LLMs lack a world model, that they're just manipulating words, that they're just spitting out probabilistic answers, and so on.
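For concreteness, here is a minimal sketch of what that legality check involves, leaning on the python-chess library (an assumption on my part: the point of the challenge is presumably that without such a library you'd have to encode the rules of chess yourself, which is exactly the "world model" at issue; the helper name is mine):

    import chess

    def last_move_is_legal(movetext: str) -> bool:
        """Replay a PGN-style movetext; return False if any move is illegal."""
        board = chess.Board()
        # Drop move numbers like "1." and keep the SAN move tokens.
        sans = [tok for tok in movetext.split() if not tok.endswith(".")]
        for san in sans:
            try:
                board.push_san(san)  # raises ValueError on an illegal move
            except ValueError:
                return False
        return True

    print(last_move_is_legal("1. e4 d5 2. exd5 Qxd5 3. Nc3"))  # True
    print(last_move_is_legal("1. e4 d5 2. exd5 Qxd5 3. e5"))   # False: white's e-pawn is gone

Note where the chess knowledge lives: in the library's board state and move generator. Producing the same judgment from raw move strings alone, with no hand-coded rules, is the hard part the comment is pointing at.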