eesmith 9 hours ago
These sorts of problems assume that actions have no consequences beyond the immediate decision. These tests, when run in places with higher long-term expectation of social connection, give different results than in the US. In the world where <50% press blue, you know that everyone left alive (the red pressers) would save themselves rather than take a risk helping you or those who aren't clever at game theory problems. I don't want to live in that world, so blue for me. And should I die, it's the fault of everyone who pressed red.
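The rule being argued over can be sketched in a few lines, assuming the usual formulation of this puzzle (red is always safe; blue pressers survive only if blue reaches at least 50% of all choices — the exact threshold convention is an assumption here):

```python
def survivors(choices):
    """choices: list of 'red'/'blue' picks. Returns indices of survivors.

    Assumes the common formulation: red is unconditionally safe, and
    blue pressers live only if blue makes up at least half the picks.
    """
    blue = sum(1 for c in choices if c == 'blue')
    blue_safe = blue >= len(choices) / 2
    return [i for i, c in enumerate(choices) if c == 'red' or blue_safe]

# Blue falls short of 50%: only the red pressers survive.
print(survivors(['blue', 'red', 'red']))   # → [1, 2]
# Blue reaches 50%: everyone survives.
print(survivors(['blue', 'blue', 'red']))  # → [0, 1, 2]
```

The asymmetry the thread is debating is visible in the rule itself: red's payoff never depends on anyone else, while blue's depends entirely on what the rest of the group does.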
bennettnate5 9 hours ago | parent | next
This doesn't seem to be a game that tries to be particularly clever: one button could kill you, the other certainly won't. Trusting that nearly everyone will avoid pressing the button that could kill them seems a reasonable assumption, and it's not necessarily an indication of a lack of altruism.
rayiner 9 hours ago | parent | prev
That knowledge isn't a "consequence" of the game. It's a symptom of a fact that's knowable a priori. Running the game doesn't make it true; running the game merely reveals something that was already true.