Pooge 2 hours ago

> Pretty sure that's wrong. The way it works is: we have this equation. It predicts where we expect such stuff to be in X seconds. In X seconds, we check it's indeed there. It's there: actual confirmation, not confirmation bias.

Exactly. My point is that since Einstein's theory, we know that Newton's law is incomplete, which proves it was confirmation bias (i.e. that our equations just confirmed what we observed). Once we observed black holes, we knew Newton's law was incomplete, as it couldn't fully explain their behavior.

jraph 2 hours ago | parent [-]

> i.e. that our equations just confirmed what we observed

No, no, it's the opposite, and that's key! What we had been observing kept matching what the equations predicted, "so far". There was no cherry-picking, and no refusal (conscious or not) to see the cases where the model doesn't apply; that is what would have been confirmation bias.

We did, in fact, question the model as soon as we noticed it didn't apply.
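The predict-then-check loop described above can be sketched in a few lines. This is a toy illustration with made-up numbers, not real physics: a constant-velocity "model" stands in for Newton's laws, and the `observed` values stand in for measurements.

```python
# Toy predict-then-check loop: commit to a prediction first,
# then compare it against an observation.

def predict_position(x0, v, t):
    """Constant-velocity (Newtonian) prediction: x = x0 + v * t."""
    return x0 + v * t

def check(predicted, observed, tolerance=1e-3):
    """The model survives only if the observation matches the prediction."""
    return abs(predicted - observed) <= tolerance

# Prediction made *before* looking at the measurement:
predicted = predict_position(x0=0.0, v=2.0, t=5.0)  # expect 10.0

# A matching observation is a genuine confirmation, not bias:
print(check(predicted, observed=10.0002))  # True

# A clear mismatch falsifies the model; ignoring or explaining away
# this mismatch is what confirmation bias would look like:
print(check(predicted, observed=10.7))     # False
```

The key point is the ordering: the prediction is fixed before the observation is consulted, so a match carries real evidential weight.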

Confirmation bias implies "cognitive blinkers", and I don't think that happened in the Newton vs. Einstein case.

But I agree the risk of confirmation bias is never far away. It's an issue in the general population, and it's likely a big issue in research as well.

Pooge 2 hours ago | parent [-]

Don't we write the equations after observing a phenomenon? It wouldn't make sense to try to explain something before observing it.

For example, after observing black holes we understood that Newton's theory was not enough to explain them, so we had to find another theory that explained our observations. Now, with quantum mechanics, we know that Einstein's theory is insufficient too (though I'm not very knowledgeable about quantum physics myself).