qsort 3 hours ago

This is a fantastic idea. I used to play MtG competitively, and a strong artificial opponent was always something I'd have loved.

The issue I see is that you'd need a huge number of games to tell who's better (you need that between humans too; the game is very high variance).

Another problem is that assigning a positional evaluation to count mistakes is hard because MtG, in addition to having randomness, has private information. It could be rational for both players to believe they're currently winning even if they're both perfect Bayesians. You'd need something that approximates "this is the probability of winning the game from this position, given all the information I have," which is almost certainly asymmetric and much more complicated than the equivalent for a game with randomness but no private information, such as backgammon.
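Concretely, I'd imagine something like the sketch below: each player estimates their own win probability by repeatedly sampling a complete game state consistent with what they can see and rolling the game out. This is purely illustrative; sample_hidden_state and rollout are stand-ins for a real MtG engine, not anything from the project.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Observation:
        # Everything one player can see: own hand, battlefield,
        # graveyards, life totals, cards revealed so far, etc.
        player: int

    def estimate_win_probability(
        obs: Observation,
        sample_hidden_state: Callable[[Observation], object],
        rollout: Callable[[object], int],  # plays the game out, returns the winner's index
        n_samples: int = 200,
    ) -> float:
        # Monte-Carlo estimate of P(I win | what I can see). Each player
        # marginalizes over the opponent's hidden cards separately, which is
        # why both estimates can be above 0.5 at the same time without either
        # player being wrong.
        wins = 0
        for _ in range(n_samples):
            full_state = sample_hidden_state(obs)  # consistent with my information only
            if rollout(full_state) == obs.player:
                wins += 1
        return wins / n_samples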

GregorStocks 3 hours ago

You wouldn't really need a _ton_ of games to get plausible data, but unfortunately today each game costs real money - typically a dollar or more with my current harness, though I'm hoping to optimize it, and of course I expect model costs to continue to decline over time. But even reasonably expensive models today are making tons of blunders that a tournament grinder wouldn't.

I'm not trying to compute a chess-style "player X was at 0.4 before this move and at 0.2 afterwards, so it was a -0.2 blunder", but I do have "blunder analysis" where I just ask Opus to second-guess every decision after the game is over - there's a bit more information on the Methodology page. So then you can compare models by looking at how often they blunder, rather than the binary win/loss data. If you look at individual games you can jump to the "blunders" on the timeline - most of the time I agree with Opus's analysis.
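The aggregation side is simple - roughly something like the sketch below, where ask_judge is just a stand-in for the actual call that has Opus second-guess a decision, not the real harness code:

    from collections import defaultdict
    from typing import Callable, Dict, Iterable, Tuple

    # (model_name, game_id, description of the decision and its context)
    Decision = Tuple[str, str, str]

    def blunder_rates(
        decisions: Iterable[Decision],
        ask_judge: Callable[[str], bool],  # True if the judge model calls it a blunder
    ) -> Dict[str, float]:
        flagged: Dict[str, int] = defaultdict(int)
        total: Dict[str, int] = defaultdict(int)
        for model, _game_id, description in decisions:
            total[model] += 1
            if ask_judge(description):
                flagged[model] += 1
        # Blunders per decision, so a model isn't penalized just for
        # playing longer games with more decision points.
        return {model: flagged[model] / total[model] for model in total}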