▲ | AnotherGoodName a day ago |
AlphaGo (and Stockfish, which another commenter mentioned) still has to search ahead using a world model. The AI training just helps with the heuristics for pruning and evaluating that search. The big fundamental blocker to a generic ‘can play any game’ AI is the manual implementation of the world model. If you read the AlphaGo paper you’ll see ‘we started with nothing but an implementation of the game rules’. That’s the part we’re missing. It’s done by humans.
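To make the point concrete, here’s a minimal sketch of “search ahead using a world model”: depth-unlimited negamax over tic-tac-toe, where the hand-written rules (`legal_moves`, `apply`, `winner`) are the world model. This is illustrative only, not code from the AlphaGo paper; AlphaGo uses Monte Carlo tree search with learned policy/value networks instead of exhaustive negamax, but the rules engine being searched is hand-implemented in exactly the same sense.

```python
# Illustrative sketch: exhaustive search over a hand-coded world model.
# In AlphaGo, learned networks replace the brute-force recursion below
# with pruning and position evaluation, but the rules are still manual.
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """The 'world model': who has three in a row, if anyone."""
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == "."]

def apply(board, move, player):
    """State transition function of the world model."""
    return board[:move] + player + board[move + 1:]

@lru_cache(maxsize=None)
def negamax(board, player):
    """Best achievable value for `player`: +1 win, 0 draw, -1 loss."""
    if winner(board) is not None:
        return -1          # the opponent's last move won the game
    moves = legal_moves(board)
    if not moves:
        return 0           # board full: draw
    opp = "o" if player == "x" else "x"
    return max(-negamax(apply(board, m, player), opp) for m in moves)

def best_move(board, player):
    opp = "o" if player == "x" else "x"
    return max(legal_moves(board),
               key=lambda m: -negamax(apply(board, m, player), opp))
```

With perfect play from the empty board the value is a draw (`negamax("." * 9, "x") == 0`), and from `"xx.oo...."` x’s best move is completing the top row at square 2. For Go this exhaustive recursion is hopeless, which is why AlphaGo’s networks exist, but they prune and evaluate a search that still runs over hand-written rules.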
▲ | moyix a day ago | parent | next [-] |
Note that MuZero did better than AlphaGo, without access to preprogrammed rules: https://en.wikipedia.org/wiki/MuZero
▲ | smokel a day ago | parent | prev [-] |
Implementing a world model seems to be mostly solved by LLMs. Finding one that can be evaluated fast enough to actually solve games is extremely hard, for humans and AI alike.