| ▲ | azan_ 3 hours ago | |
The problem is that reproducing something is really, really hard! Even if something doesn't reproduce in one experiment, that might be due to slight changes in variables we don't even think about. There are ways to mitigate this (e.g. the team whose work is being reproduced cooperating with the reproducing team and agreeing on which variables matter for the experiment and which don't), but it's really hard. The solutions you propose will unfortunately incentivize bad reproductions, and we might reject theories that are actually true because of that.

I think one of the best ways to fight the crisis is to actually improve the quality of science: articles whose authors refuse to share their data should be automatically rejected. We should also move towards requiring preregistration with strict protocols for almost all studies.
| ▲ | gowld a minute ago | parent | next [-] | |
Every time the easy "Reproducibility is hard / not worth the effort" excuse comes up, I hear "The original research wasn't meaningful or valuable."
| ▲ | gcr an hour ago | parent | prev | next [-] | |
That's fine! The tech report should describe what the reproducing researchers tried and what didn't work. I don't think submissions to the reproducibility track should have to be positive to be accepted, and conversely, I don't think the presence of a negative reproduction should necessarily harm an author's career.
| ▲ | AnIrishDuck 2 hours ago | parent | prev [-] | |
Yeah, this feels like another reincarnation of the ancient "who watches the watchmen?" problem [1]. Time and time again we see that the incentives _really really_ matter when facing this problem; subtle changes can produce entirely new problems.

1. https://en.wikipedia.org/wiki/Quis_custodiet_ipsos_custodes%...