svachalek a day ago:
The state of CS papers is truly awful, given that they are uniquely positioned to be 100% reproducible. And yet my experience aligns with yours in that they very rarely are.
0cf8612b2e1e a day ago:
Even more ridiculous is the number of papers that do not include code at all. Sure, maybe Google cannot offer an environment to replicate the underlying 1PB dataset, but for mortals this is rarely a concern. Even better is when the paper says the code will be released after publication, and then the authors cannot be bothered to post it anywhere.
justinnk a day ago:
I can second this: even availability of the code is still a problem. However, I would not say CS results are rarely reproducible, at least from the few experiences I have had so far, though I have heard of problematic cases from others. I guess it also differs between fields.

I want to note there is hope. Contrary to what the root comment says, some publishers are trying to promote reproducible results. See for example the ACM reproducibility initiative [1]. I have participated in this before and believe it is a really good initiative. Reproducing results can be very labor intensive though, adding load to a review system already struggling under a massive flood of papers. It is also not perfect: most of the time it only ensures that the author-supplied code produces the presented results. Still, I think more such initiatives are healthy.

When you really want to ensure the rigor of a presented method, you have to replicate it, e.g., by reimplementing it in a different programming language, which is really its own research endeavor. And there is already a place to publish such results in CS [2] (although I haven't tried this one). I imagine this may be especially interesting for PhD students just starting out in a new field, as it gives them the opportunity to learn while satisfying the expectation of producing papers.

[1] https://www.acm.org/publications/policies/artifact-review-an...

[2] https://rescience.github.io