parpfish 4 hours ago

In theory, asking grad students and early career folks to run replications would be a great training tool.

But the problem isn’t just funding, it’s time. Successfully running a replication doesn’t get you a publication to help your career.

rtkwe 3 hours ago | parent | next [-]

That... still requires funding. Even if your lab happens to have all the equipment required to replicate, you're paying the grad student for their time spent replicating this paper, and you'll need to buy some supplies: chemicals, animal subjects, shared equipment time, etc.

goalieca 3 hours ago | parent | prev | next [-]

Grad students don’t get to publish a thesis on reproduction. Everyone from the undergraduate research assistant to the tenured professor with research chairs is hyper-focused on “publishing” as much “positive result” on “novel” work as possible.

Kinrany 3 hours ago | parent | next [-]

Publishing a replication could be a prerequisite to getting the degree

The question is, how can universities coordinate to add this requirement and gain status from it?

ihaveajob 3 hours ago | parent [-]

I think Arxiv and similar could contribute positively by listing replications/falsifications, with credit to the validating authors. That would be enough of an incentive for aspiring researchers to start making a dent.

soiltype 2 hours ago | parent | prev [-]

But that seems almost trivially solved. In software it's common to value independent verification - e.g. code review. Someone who is only focused on writing new code instead of careful testing, refactoring, or peer review is widely viewed as a shitty developer by their peers. Of course there's management to consider and that's where incentives are skewed, but we're talking about a different structure. Why wouldn't the following work?

A single university or even department could make this change: reproduction is the important work, reproduction is what earns a PhD. Or require some split; maybe 20-50% novel work is also expected. Now the incentives are changed. Potentially, this university develops a reputation for reliable research. Others may follow suit.

Presumably, there's a step in this process where money incentivizes the opposite of my suggestion, and I'm not familiar enough with the process to know where.

Is it the university itself which will be starved of resources if it's not pumping out novel (yet unreproducible) research?

DSMan195276 35 minutes ago | parent | next [-]

> Presumably, there's a step in this process where money incentivizes the opposite of my suggestion, and I'm not familiar with the process to know which.

> Is it the university itself which will be starved of resources if it's not pumping out novel (yet unreproducible) research?

Researchers apply for grants to fund their research; the university is generally not paying for it and instead receives a cut of the grant money if it is awarded (i.e. the grant covers the costs to the university of providing the facilities to do the research). If a researcher could get funding to reproduce a result, then they could absolutely do it, but that's not what funds are usually being handed out for.

worik an hour ago | parent | prev [-]

> In software it's common to value independent verification - e.g. code review. Someone who is only focused on writing new code instead of careful testing, refactoring, or peer review is widely viewed as a shitty developer by their peers.

That is good practice.

It is rare, not common. Managers and funders pay for features.

Unreliable, insecure software sells very well, so making reliable, secure software is generally a "waste of money".

eks-reigh 3 hours ago | parent | prev | next [-]

You may well know this, but I get the sense that it isn’t necessarily common knowledge, so I want to spell it out anyway:

In a lot of cases, the salary for a grad student or tech is small potatoes next to the cost of the consumables they use in their work.

For example, I work for a lab that does a lot of sequencing, and when we’re busy one tech can use $10k worth of reagents in a week.

coryrc 3 hours ago | parent | prev | next [-]

Enough people will falsify the replication and pocket the money, taking you back to where you were in the first place and poorer for it. The loss of trust is an existential problem for the USA.

iugtmkbdfil834 4 hours ago | parent | prev [-]

Yeah, but doesn't publishing an easily falsifiable paper end one?

bnchrch 3 hours ago | parent | next [-]

One, it doesn't damage your reputation as much as one would think.

But two, and more importantly, no one is checking.

Tree falls in the forest, no one hears, yadi-yada.

godelski 2 hours ago | parent | next [-]

Here's a work from last year which was plagiarized. The rare thing about this work is it was submitted to ICLR, which opened reviews for both rejected and accepted works.

You'll notice you can click on author names and get links to their various scholar pages, including, notably, DBLP, which makes it easy to see how frequently authors publish with other specific authors.

Some of those authors have very high citation counts... in the thousands, with 3 having over 5k each (one with over 18k).

https://openreview.net/forum?id=cIKQp84vqN

iugtmkbdfil834 3 hours ago | parent | prev [-]

<< no one is checking.

I think this is the big part of it. There is no incentive to do it even when the study can be reproduced.

m-schuetz 2 hours ago | parent | prev | next [-]

The vast majority of papers are so insignificant that nobody bothers to try to use, and thereby replicate, them.

parpfish 3 hours ago | parent | prev | next [-]

But the thing is… nobody is doing the replication to falsify it. And if they did, it wouldn’t be published, because it’s a null result.

wizzwizz4 4 hours ago | parent | prev | next [-]

Not in most fields, unless misconduct is evident. (And what constitutes "misconduct" is cultural: if you have enough influence in a community, you can exert that influence on exactly where that definitional border lies.) Being wrong is not, and should not be, a career-ending move.

iugtmkbdfil834 3 hours ago | parent [-]

If we are aiming for quality, then being wrong absolutely should be. I would argue that is how it works in real life anyway. What we quibble over is the appropriate cutoff.

rtkwe 2 hours ago | parent [-]

There's a big gulf between being wrong because you or a collaborator missed an uncontrolled confounding factor and falsifying or altering results. Science accepts that people sometimes make mistakes in their work, because a) anyone can be expected to miss something eventually, and b) a lot of work is done by people in training, in labs you're not directly in control of (collaborators). Researchers already aim for quality, and it does hurt you if you're consistently shown to be sloppy or incorrect when people try to use your work in their own.

The final bit is a thing I think most people miss when they think about replication. A lot of papers don't get replicated directly, but their measurements do, when other researchers try to use that data to perform their own experiments; at least that's true in the more physical sciences, and it gets tougher the more human-centric the research is. You can't fake or be wrong for long when you're writing papers about the properties of compounds and molecules. Someone is going to come try to base some new idea off your data and find out you're wrong when their experiment doesn't work (or spend months trying to figure out what's wrong and finally double-check the original data).

wizzwizz4 2 hours ago | parent [-]

In fields like psychology, though, you can be wrong for decades. If your result is foundational enough, and other people have "replicated" it, then most researchers will toss out contradictory evidence as "guess those people were an unrepresentative sample". This can be extremely harmful when, for instance, the prevailing view is "this demographic are just perverts" or "most humans are selfish thieves at heart, held back by perceived social consensus" – both examples where researcher misconduct elevated baseless speculation to the position of "prevailing understanding", which led to bad policy, which had devastating impacts on people's lives.

(People are better about this in psychology, now: schoolchildren are taught about some of the more egregious cases, even before university, and individual researchers are much more willing to take a sceptical view of certain suspect classes of "prevailing understanding". The fact that even I, a non-psychologist, know about this, is good news. But what of the fields whose practitioners don't know they have this problem?)

rtkwe an hour ago | parent [-]

Yeah, like I said, the soft validation by subsequent papers is more true in the more baseline physical sciences because it involves fewer uncontrollable variables. That's why I mentioned the physical sciences in my post; messy humans are messy and make science waaay harder.

Telaneo 3 hours ago | parent | prev [-]

Not really, since nobody (for some value of "nobody") ends up actually falsifying it, and if they do, it's years down the line.