fnord123 4 hours ago

> Stop citing single studies as definitive. They are not. Check if the ones you are reading or citing have been replicated.

And from the comments:

> From my experience in social science, including some experience in management studies specifically, researchers regularly believe things – and will even give policy advice based on those beliefs – that have not even been seriously tested, or have straight up been refuted.

Sometimes people cite fewer than one non-replicable study: they invent a study outright and use that! An example is the "Harvard Goal Study" that is often trotted out at self-review time at companies. The supposed study found that people who write down their goals are more likely to achieve them than people who do not. However, Harvard itself can find no record of any such study existing:

https://ask.library.harvard.edu/faq/82314

ChrisMarshallNY 3 hours ago | parent | next [-]

Check out the “Jick Study,” mentioned in Dopesick.

https://en.wikipedia.org/wiki/Addiction_Rare_in_Patients_Tre...


KingMob 4 hours ago | parent | prev [-]

Definitely ignore single studies, no matter how prestigious the journal or numerous the citations.

Straight-up replications are rare, but if a finding is real, other PIs will partially replicate and build upon it, typically as a smaller step in a related study. (E.g., if a new finding about memory comes out and my field is emotion, I might design a new study looking at how emotion and that memory finding interact.)

If the effect is replicable, it will end up being used in other studies (subject to randomness and the file-drawer effect, anyway). But if an effect is rarely mentioned in the literature afterwards...run far, FAR away, and don't base your research on it.

A good advisor will be able to warn you off lost causes like this.