| ▲ | benlivengood 15 hours ago | |
With an optimal way of determining a fair split of gains, such as the Shapley value [0], you can cooperate or defect with a probability chosen so that the other participants' expected value is maximized exactly when everyone acts fairly.

The ultimatum game is the simplest example: there are N dollars of prize to split and N/2 is the fair share, so accept with probability M / (N/2), where M is what's offered to you. The proposer's maximum expected value then comes from offering N/2: offering M < N/2 yields an expected value of (N - M) * M / (N/2) = 2M(N - M)/N, which is strictly less than N/2, and offering more than N/2 is always accepted but also leaves them less than N/2.

Trust can be built out of clearly describing how you'll respond, in your own best interest, in ways that achieve fairness, i.e. assuming the other parties understand the concept of fairness and will also act to maximize their expected value given their knowledge of how you will act.

If you want to solve logically harder problems like the one-shot prisoner's dilemma, there are preliminary theories for how that can be done by proving things about the other participants directly. It won't work for humans, but maybe for artificial agents: https://arxiv.org/pdf/1401.5577
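For concreteness, here's a minimal Python sketch of that acceptance rule (the function names and the N = 100 example are my own illustration, not from the linked paper): the responder commits to accepting an offer M with probability M / (N/2), and under that commitment the proposer's expected value peaks at the fair offer N/2.

    import random

    def accept_probability(offer, total):
        # Accept with probability offer / (total / 2), capped at 1
        # for offers at or above the fair split.
        return min(offer / (total / 2), 1.0)

    def respond(offer, total):
        # The responder's publicly committed, probabilistic policy.
        return random.random() < accept_probability(offer, total)

    def proposer_expected_value(offer, total):
        # The proposer keeps (total - offer), but only if accepted.
        return (total - offer) * accept_probability(offer, total)

    N = 100
    best = max(range(N + 1), key=lambda m: proposer_expected_value(m, N))
    print(best)                            # 50 -- the fair split
    print(proposer_expected_value(50, N))  # 50.0
    print(proposer_expected_value(30, N))  # 70 * 0.6 = 42.0
    print(proposer_expected_value(60, N))  # 40 * 1.0 = 40.0

The point is that the responder's policy is public and credible, so any deviation from the fair offer strictly lowers the proposer's own expected value.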
| ▲ | jmward01 14 hours ago | parent [-] | |
Thanks. I'll take a look!