akoboldfrying | a day ago
If the goal is to find the perfect matching within some maximum number of turns, it's possible to do even better by using a full game tree that minimises the maximum height of the tree (= the number of turns required), instead of using information/entropy as done here.

Basically, using the entropy produces a game tree that minimises the number of steps needed in expectation -- but that tree could be quite unbalanced, having one or more low-probability leaves (perfect matchings) many turns away from the root. Such a leaf will randomly occur some small fraction of the time, meaning those games will be lost.

For concreteness, a game requiring 6 bits of information to identify the perfect matching will take 6 steps on average, and may sometimes require many more; a minimax tree of height 7 will always be solved in at most 7 steps. So if you're only allowed 7 steps, it's the safer choice.
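One way to see the difference concretely is to brute-force both kinds of tree on a toy stand-in for the game. Everything below is made up for illustration (a hidden answer among ten candidates and a contrived menu of allowed yes/no questions, playing the role of the show's constrained guesses; it is not the article's actual setup). `greedy_worst_depth` follows the max-entropy heuristic, while `minimax_depth` searches for the tree with the smallest worst-case height:

    import math
    from functools import lru_cache

    # Toy stand-in for the constrained guessing game (made up for illustration,
    # not the article's actual setup): the hidden answer is one of CANDIDATES,
    # and each allowed "question" is a subset -- the reply says whether the
    # answer lies in that subset.
    CANDIDATES = frozenset(range(10))
    QUESTIONS = [frozenset(s) for s in (
        [0, 1, 2, 3, 4],                 # the most balanced first question (5/5)
        [5, 6, 7, 8], [5, 6, 7, 9],      # the only multi-element probes of {5..9}
        [0, 1, 9], [0, 1], [2, 3],
    )] + [frozenset([i]) for i in range(10)]  # singletons keep every state solvable

    def entropy(candidates, q):
        """Entropy (in bits) of the yes/no split that question q induces."""
        k, n = len(candidates & q), len(candidates)
        if k in (0, n):
            return 0.0
        p = k / n
        return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

    def greedy_worst_depth(candidates):
        """Worst-case #questions when we always ask the max-entropy question."""
        if len(candidates) <= 1:
            return 0
        q = max(QUESTIONS, key=lambda q: entropy(candidates, q))
        yes, no = candidates & q, candidates - q
        return 1 + max(greedy_worst_depth(yes), greedy_worst_depth(no))

    @lru_cache(maxsize=None)
    def minimax_depth(candidates):
        """Smallest worst-case #questions achievable by any question tree."""
        if len(candidates) <= 1:
            return 0
        best = math.inf
        for q in QUESTIONS:
            yes, no = candidates & q, candidates - q
            if yes and no:
                best = min(best, 1 + max(minimax_depth(yes), minimax_depth(no)))
        return best

    print("greedy max-entropy tree, worst case:", greedy_worst_depth(CANDIDATES))
    print("height-minimising tree,  worst case:", minimax_depth(CANDIDATES))

The question menu is rigged so that the most balanced first question leads into a region ({5..9}) that can then only be whittled down roughly one candidate at a time, so the greedy tree should come out a question deeper in the worst case than the height-minimising tree, which is exactly the kind of unbalanced, low-probability-but-deep leaf described above.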
codeflo | 12 hours ago
That's a good point: entropy is only a heuristic for the thing you actually want to optimize, the worst-case number of guesses (though it's probably a very good heuristic).

> Basically, using the entropy produces a game tree that minimises the number of steps needed in expectation

It might be even worse than that for problems of this kind in general. You're essentially using a greedy strategy: you optimize early information gain. It's clear that this doesn't optimize the worst case, but it might not optimize the expected number of steps either. I don't see why it couldn't be the case that an expected-steps-optimal strategy gains less information early on, and thus produces larger sets of possible solutions, but through some quirk those larger sets are easier to separate later.
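One way to probe that possibility is to compute the true expected-steps optimum by brute force and compare it with the greedy strategy. Again a sketch over a made-up toy (uniform prior, yes/no subset questions, self-contained and not tied to the article's game); whether the two numbers differ depends on the menu of allowed questions, so this is a tool for hunting counterexamples rather than a counterexample itself:

    import math
    from functools import lru_cache

    # Made-up toy model: uniform prior over CANDIDATES, yes/no subset questions.
    # Compare the expected number of questions used by the greedy max-entropy
    # strategy with the true optimum found by exhaustive search over all trees.
    CANDIDATES = frozenset(range(7))
    QUESTIONS = [frozenset(s) for s in ([0, 1, 2], [0, 1], [3, 4], [5, 6])] \
                + [frozenset([i]) for i in range(7)]  # singletons guarantee progress

    def split_entropy(cands, q):
        k, n = len(cands & q), len(cands)
        if k in (0, n):
            return 0.0
        p = k / n
        return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

    def greedy_expected(cands):
        """Expected #questions when we always ask the max-entropy question."""
        n = len(cands)
        if n <= 1:
            return 0.0
        q = max(QUESTIONS, key=lambda q: split_entropy(cands, q))
        yes, no = cands & q, cands - q
        return 1 + (len(yes) / n) * greedy_expected(yes) \
                 + (len(no) / n) * greedy_expected(no)

    @lru_cache(maxsize=None)
    def optimal_expected(cands):
        """Minimum achievable expected #questions, by brute force."""
        n = len(cands)
        if n <= 1:
            return 0.0
        best = math.inf
        for q in QUESTIONS:
            yes, no = cands & q, cands - q
            if yes and no:
                best = min(best, 1 + (len(yes) / n) * optimal_expected(yes)
                                  + (len(no) / n) * optimal_expected(no))
        return best

    print("greedy max-entropy, expected questions:", round(greedy_expected(CANDIDATES), 3))
    print("expected-steps optimum:                ", round(optimal_expected(CANDIDATES), 3))

For this particular small menu the two come out equal; the point is that nothing forces that in general, and the exhaustive search makes it easy to test other menus.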
thaumasiotes | a day ago
> For concreteness, a game requiring 6 bits of information to identify the perfect matching will take 6 steps on average, and may sometimes require many more

I'm not following your logic. Consider the setup we actually have:

1. You get to ask a series of yes/no questions. (Ignoring the matchups.)

2. Each question can produce, in expectation, up to one bit of information.

3. In order to achieve this maximum expectation, it is mathematically necessary to use questions that invariably produce exactly one bit of information, never more or less. If your questions do not have this property, they will in expectation produce less than the maximum amount of information, and your expected number of steps to the solution will increase, which contradicts your stated goal of minimizing that quantity.

You get the minimum number of steps needed in expectation by always using questions with maximum entropy. Yes. But those questions never have any variation in the amount of information they produce; a maximum-entropy strategy can never take more - or fewer - steps than average.¹

¹ Unless the number of bits required to solve the problem is not an integer. Identifying one of three options requires log2(3) ≈ 1.585 bits; in practice, this means you'll get it after one question 1/3 of the time and after two questions the other 2/3 of the time. But identifying one of 128 options requires exactly 7 bits: you'll always get it after 7 questions, and you'll never get it after 6. (Assuming you're using a strategy where the expected number of questions needed is 7.)
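The footnote's arithmetic is easy to check directly. A minimal sketch, assuming n equally likely options and a strategy that always splits the remaining set as evenly as possible:

    import math

    def expected_questions(n):
        """Expected yes/no questions to isolate 1 of n equally likely options,
        always splitting the remaining set as evenly as possible."""
        if n <= 1:
            return 0.0
        a = n // 2          # one branch gets floor(n/2) options...
        b = n - a           # ...the other gets the rest
        return 1 + (a / n) * expected_questions(a) + (b / n) * expected_questions(b)

    for n in (3, 128):
        print(n, "options:", round(math.log2(n), 3), "bits,",
              round(expected_questions(n), 3), "questions expected")

This gives 1.585 bits versus 5/3 ≈ 1.667 expected questions for 3 options, and exactly 7 for both with 128 options, matching the footnote.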