jltsiren 2 hours ago

I've seen this from both sides.

Sometimes the result is wrong, or it's not as big or as general as claimed. Or maybe the provided instructions are insufficient to replicate the work. But sometimes the attempt to replicate a result fails, because the person doing it does not understand the topic well enough.

Maybe they are just doing the wrong things, because their general understanding of the situation is incorrect. Maybe they fail to follow the instructions correctly, because they have subtle misunderstandings. Or maybe they are trying to replicate the result with data they consider similar, but which is actually different in an important way.

The last one is often a particularly difficult situation to resolve. If you understand the topic well enough, you may be able to figure out how the data is different and what should be changed to replicate the result. But that requires access to the data. Very often, one side has the data and another side the understanding, but neither side has both.

Then there is the question of time. Very often, the person trying to replicate the result has a deadline. If they haven't succeeded by then, they will abandon the attempt and move on. But the deadline may be so tight that the authors can't be reasonably expected to figure out the situation by then. Maybe if there is a simple answer, the authors can be expected to provide it. But if the issue looks complex, it may take months before they have sufficient time to investigate it. Or if the initial request is badly worded or shows a lack of understanding, it may not be worth dealing with. (Consider all the bad bug reports and support requests you have seen.)

godelski an hour ago

I definitely think all of these are important, even if in different ways. For the subtle (and even not so subtle) misunderstandings, it matters who misunderstands. For the most part, I don't think we should concern ourselves with non-experts. We do need science communicators, but that is a different job (I'm quite annoyed at those on HN who critique arxiv papers for being too complex while admitting they aren't researchers themselves). We write papers to communicate with peers, not the public. If we wrote for the latter, each publication would have to be prepended with several textbooks' worth of material.

But if it is another expert misunderstanding, then I think there's something quite valuable there. IFF the other expert is acting in good faith (i.e. they are doing more than a quick read and are actually taking their time with the work), then I think it highlights ambiguity. I think the best way to approach this is to distinguish by how widespread the misunderstanding is. If it is uncommon, well... we're human, and no matter how smart you are, you'll produce mountains of evidence to the contrary (we all do stupid shit). But if the misunderstanding is widespread, then we can be certain that ambiguity exists, and it is worth resolving. I've seen exactly what you've seen, as well as misunderstandings leading to discoveries. Sometimes our idiocy can be helpful lol.

But in any case, I don't know how we figure out which category of failure it is without the work being published. If no one else reads it, the odds of finding the problem drop substantially.

FWIW, I'm highly in favor of a low bar to publishing. The goal of publishing is to communicate with our peers; I'm not sure why we get so fixated on things like journal prestige. That's missing the point. My bar is: 1) it is not obviously wrong, 2) it is not plagiarized (obviously or not), 3) it is useful to someone. We do need some filters, but there are already natural filters beyond the journals and conferences. I mean, we're all frequently reading "preprints" already, right?

I think one of the biggest mistakes we make is conflating publication with correctness. We can't prove correctness anywhere; science is more a process of elimination. It's silly to think that the review process could provide correctness. It can (imperfectly) invalidate works, but not validate them. And it isn't just the public that seems to have this misunderstanding...

jltsiren 28 minutes ago

Things are easier when you are writing to your peers within an established academic field. But all too often, the target audience includes people in neighboring fields, and then it can easily happen that most of the people trying to replicate the work are non-experts.

For example, most of my work is in algorithmic bioinformatics, which is a small field. Computer scientists developing similar methods may want to replicate my work, but they often lack practical familiarity with bioinformatics. Bioinformaticians trying to be early adopters may also attempt to replicate it, but they are often unfamiliar with the theoretical aspects. Such a variety of backgrounds can be fertile ground for misunderstandings.