wat10000 | 14 hours ago
It's suspicious when it lands on something that people might be biased towards. For example, you take the top five cards and you get a royal flush of diamonds in ascending order. In theory, this sequence is no more or less probable than any other sequence taken from a randomly shuffled deck. But given that this particular sequence has special significance to people, there's a very good reason to think it indicates the deck is not randomly shuffled.

In probability terms, you can't just look at the probability of getting this result from a fair coin (or deck, or whatever). You have to weigh that against the probability that the coin (deck, etc.) is biased, and the probability that a biased coin would produce the outcome you got.

If you flip a coin that feels and appears perfectly ordinary and you get exactly 100 heads and 100 tails, you should still be pretty confident that it's unbiased. But if you ask somebody else to flip a coin 200 times, you can't actually see them, you know they're lazy, and they come back and report exactly 100/100, that's a good indicator they didn't do the flips.
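A quick back-of-the-envelope check of that last claim in Python. The 10% prior on a faked report and the assumption that a lazy faker always reports a perfect 100/100 split are illustrative numbers, not anything from the comment:

```python
from math import comb

# Probability a fair coin gives exactly 100 heads in 200 flips.
p_exact = comb(200, 100) / 2**200
print(f"P(exactly 100/100 | fair) = {p_exact:.4f}")  # ~0.0563

# Bayesian update for "the flips were faked", assuming a lazy faker
# always reports a perfect 100/100 split, with a 10% prior that the
# report is faked (both numbers are illustrative assumptions).
prior_fake = 0.10
p_report_given_fake = 1.0
p_report_given_real = p_exact
posterior_fake = (prior_fake * p_report_given_fake) / (
    prior_fake * p_report_given_fake
    + (1 - prior_fake) * p_report_given_real
)
print(f"P(faked | report of 100/100) = {posterior_fake:.2f}")  # ~0.66
```

So even though a 5.6% outcome is nothing special on its own, a modest prior suspicion of laziness turns an exact 100/100 report into better-than-even odds that the flips never happened.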
tshaddox | 10 hours ago | parent
> It's suspicious when it lands on something that people might be biased towards.

Eh, this only makes sense if you're incorporating information about who set up the experiment into your statistical model. If you somehow knew that there's a 50% probability that you were given a fair coin and a 50% probability that you were given an unfair coin that lands on the opposite side of its previous flip 90% of the time, then yes, you could incorporate that sort of knowledge into your analysis of your single trial of 200 flips.