vld_chk 2 hours ago

In my mental model, the fundamental problem of reproducibility is that scientists have a very hard time finding a penny to fund such research. No one wants to fund "hey, I need $1m and 2 years to validate that paper from last year which looks suspicious".

Until we can change how we fund science at the fundamental level, how we assign grants, it will indeed be a very hard problem to deal with.

parpfish 2 hours ago | parent | next [-]

In theory, asking grad students and early career folks to run replications would be a great training tool.

But the problem isn’t just funding, it’s time. Successfully running a replication doesn’t get you a publication to help your career.

rtkwe an hour ago | parent | next [-]

That... still requires funding. Even if your lab happens to have all the equipment required to replicate, you're paying the grad student for their time spent replicating the paper, and you'll need to buy supplies: chemicals, animal subjects, shared equipment time, etc.

goalieca an hour ago | parent | prev | next [-]

Grad students don’t get to publish a thesis on reproduction. Everyone from the undergraduate research assistant to the tenured professor with research chairs is hyper-focused on “publishing” as much “positive result” on “novel” work as possible.

soiltype 21 minutes ago | parent | next [-]

But that seems almost trivially solved. In software it's common to value independent verification - e.g. code review. Someone who is only focused on writing new code instead of careful testing, refactoring, or peer review is widely viewed as a shitty developer by their peers. Of course there's management to consider and that's where incentives are skewed, but we're talking about a different structure. Why wouldn't the following work?

A single university or even department could make this change: reproduction is the important work, reproduction is what earns a PhD. Or require some split, where maybe 20-50% novel work is also expected. Now the incentives are changed. Potentially, this university develops a reputation for reliable research. Others may follow suit.

Presumably, there's a step in this process where money incentivizes the opposite of my suggestion, and I'm not familiar enough with the process to know which one.

Is it the university itself which will be starved of resources if it's not pumping out novel (yet unreproducible) research?

Kinrany an hour ago | parent | prev [-]

Publishing a replication could be a prerequisite to getting the degree

The question is, how can universities coordinate to add this requirement and gain status from it

ihaveajob an hour ago | parent [-]

I think Arxiv and similar could contribute positively by listing replications/falsifications, with credit to the validating authors. That would be enough of an incentive for aspiring researchers to start making a dent.

eks-reigh an hour ago | parent | prev | next [-]

You may well know this, but I get the sense that it isn’t necessarily common knowledge, so I want to spell it out anyway:

In a lot of cases, the salary for a grad student or tech is small potatoes next to the cost of the consumables they use in their work.

For example, I work for a lab that does a lot of sequencing, and if we’re busy, one tech can use $10k worth of reagents in a week.

coryrc an hour ago | parent | prev | next [-]

Enough people will falsify the replication and pocket the money, taking you back to where you were in the first place and poorer for it. The loss of trust is an existential problem for the USA.

iugtmkbdfil834 2 hours ago | parent | prev [-]

Yeah, but doesn't publishing an easily falsifiable paper end a career?

m-schuetz a few seconds ago | parent | next [-]

The vast majority of papers are so insignificant that nobody bothers to try to use, and thereby replicate, them.

bnchrch 2 hours ago | parent | prev | next [-]

One, it doesn't damage your reputation as much as one would think.

But two, and more importantly, no one is checking.

Tree falls in the forest, no one hears, yadi-yada.

iugtmkbdfil834 an hour ago | parent [-]

<< no one is checking.

I think this is the big part of it. There is no incentive to do it even when the study can be reproduced.

parpfish 2 hours ago | parent | prev | next [-]

But the thing is… nobody is doing the replication to falsify it. And if they did, it wouldn’t be published because it’s a null result.

Telaneo an hour ago | parent | prev | next [-]

Not really, since nobody (for most values of nobody) ends up actually falsifying it, and if they do, it's years down the line.

wizzwizz4 2 hours ago | parent | prev [-]

Not in most fields, unless misconduct is evident. (And what constitutes "misconduct" is cultural: if you have enough influence in a community, you can exert that influence on exactly where that definitional border lies.) Being wrong is not, and should not be, a career-ending move.

iugtmkbdfil834 an hour ago | parent [-]

If we are aiming for quality, then being wrong absolutely should be. I would argue that is how it works in real life anyway. What we quibble over is the appropriate cutoff.

rtkwe 40 minutes ago | parent [-]

There's a big gulf between being wrong because you or a collaborator missed an uncontrolled confounding factor and falsifying or altering results. Science accepts that people sometimes make mistakes in their work because a) everyone can be expected to miss something eventually, and b) a lot of work is done by people in training, in labs you're not directly in control of (collaborators). Researchers already aim for quality, and your reputation suffers if you're consistently shown to be sloppy or incorrect when people try to use your work in their own.

The final bit is a thing I think most people miss when they think about replication. A lot of papers don't get replicated directly, but their measurements do when other researchers try to use that data to perform their own experiments, at least in the more physical sciences; this gets tougher the more human-centric the research is. You can't fake results, or be wrong for long, when you're writing papers about the properties of compounds and molecules. Someone is going to try to base some new idea on your data and find out you're wrong when their experiment doesn't work (or spend months trying to figure out what's wrong and finally double-check the original data).

godelski 29 minutes ago | parent | prev | next [-]

Funding is definitely a problem, but frankly reproduction is common. If you build off someone else's work (as is the norm) you need to reproduce first.

But without replication being impactful to your career, and with the pressure to quickly and constantly push new work, a failure to reproduce is generally treated as a reason to move on and tackle a different domain. It takes longer to trace the failure, and the bar is higher to counter an existing work. It's much more likely you've made a subtle mistake. It's much more likely the other work had a subtle success. It's much more likely the other work simply wasn't written up in enough detail to be sufficiently reproduced.

I speak from experience too. I still remember in grad school I was failing to reproduce a work that was the main competitor to the work I had done (I needed to create comparisons). I emailed the author and got no response. Luckily my advisor knew the author's advisor and we got a meeting set up and I got the code. It didn't do what was claimed in the paper and the code structure wasn't what was described either. The result? My work didn't get published and we moved on. The other work was from a top 10 school and the choice was to burn a bridge and put a black mark on my reputation (from someone with far more merit and prestige) or move on.

That type of thing won't change under a reproduction system alone; it needs an open system, and an open reproduction system as well. Mistakes are common and we shouldn't punish them. The only way to solve these issues is openness.

jghn an hour ago | parent | prev | next [-]

Partially. There's also the issue that some sciences, like biology, are a lot messier and less predictable than people like to believe.

poszlem 2 hours ago | parent | prev [-]

I often think we should move from peer review as "certification" to peer review as "triage", with replication determining how much trust and downstream weight a result earns over time.