| ▲ | grayhatter a day ago |
| > The reviewer is not a proofreader, they are checking the rigour and relevance of the work, which does not rest heavily on all of the references in a document. I've always assumed peer review is similar to diff review, where I'm willing to sign my name onto the work of others. If I approve a diff/PR and it takes down prod, it's just as much my fault, no? > They are also assuming good faith. I can only relate this to code review, but assuming good faith means you assume they didn't try to introduce a bug by adding this dependency. But I should still check to make sure this new dep isn't some typosquatted package. That's the rigor I'm responsible for. |
|
| ▲ | dilawar a day ago | parent | next [-] |
| > I've always assumed peer review is similar to diff review, where I'm willing to sign my name onto the work of others. If I approve a diff/PR and it takes down prod, it's just as much my fault, no? Ph.D. in neuroscience here, programmer by trade. This is not true. The less you know about most peer reviews, the better. The better peer reviews are also not this 'thorough' and no one expects reviewers to read or even check references. If the authors cite something the reviewer is familiar with and use it wrong, the reviewer will likely complain; or if an unfamiliar citation looks very relevant to their own work, they will read it. I don't have a great analogy to draw here. Peer review is usually thankless, unpaid work, so there is little motivation for fraud detection unless it somehow affects your own work. |
| |
| ▲ | wpollock a day ago | parent [-] | | > The better peer reviews are also not this 'thorough' and no one expects reviewers to read or even check references. Checking references can be useful when you are not familiar with the topic (but must review the paper anyway). In many conference proceedings that I have reviewed for, many if not most citations were redacted so as to keep the author anonymous (citations to the author's prior work or that of their colleagues). LLMs could be used to find prior work anyway, today. |
|
|
| ▲ | tpoacher a day ago | parent | prev | next [-] |
| This is true, but here the equivalent situation is someone using a Greek question mark (";") instead of a semicolon (";"), and you as a code reviewer are only expected to review the code visually and are not provided the resources required to compile the code on your local machine to see the compiler fail. Yes, in theory you can go through every semicolon to check that it isn't actually a Greek question mark, but one assumes good faith and baseline competence, such that you as the reviewer would generally not be expected to perform such pedantic checks. So if you think you might reasonably have missed Greek question marks in a visual code review, then hopefully you can also appreciate how a paper reviewer might miss a false citation. |
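For what it's worth, this particular class of defect is cheap to catch with tooling rather than reviewer eyeballs. A minimal sketch in Python, assuming UTF-8 source files; the output format and file handling here are just illustrative:

    # Scan source files for U+037E (GREEK QUESTION MARK), which renders
    # like an ASCII semicolon but breaks compilation.
    import sys

    CONFUSABLE = "\u037e"  # Greek question mark

    def check(path: str) -> int:
        """Report each line in `path` that contains the confusable character."""
        hits = 0
        with open(path, encoding="utf-8") as f:
            for lineno, line in enumerate(f, start=1):
                col = line.find(CONFUSABLE)
                if col != -1:
                    print(f"{path}:{lineno}:{col + 1}: Greek question mark instead of ';'")
                    hits += 1
        return hits

    if __name__ == "__main__":
        total = sum(check(p) for p in sys.argv[1:])
        sys.exit(1 if total else 0)

Wired into CI as a lint step, nobody ever has to eyeball semicolons again; the open question in this thread is what the equivalent check for citations looks like.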
| |
| ▲ | scythmic_waves a day ago | parent | next [-] | | > as a code reviewer [you] are only expected to review the code visually and are not provided the resources required to compile the code on your local machine to see the compiler fail. As a PR reviewer I frequently pull down the code and run it. Especially if I'm suggesting changes because I want to make sure my suggestion is correct. Do other PR reviewers not do this? | | |
| ▲ | dataflow a day ago | parent | next [-] | | I don't commonly do this and I don't know many people who do this frequently either. But it depends strongly on the code, the risks, the gains of doing so, the contributor, the project, the state of testing and how else an error would get caught (I guess this is another way of saying "it depends on the risks"), etc. E.g. you can imagine that if I'm reviewing changes in authentication logic, I'm obviously going to put a lot more effort into validation than if I'm reviewing a container and wondering if it would be faster as a hashtable instead of a tree. > because I want to make sure my suggestion is correct. In this case I would just ask "have you already also tried X" which is much faster than pulling their code, implementing your suggestion, and waiting for a build and test to run. | |
| ▲ | tpoacher a day ago | parent | prev | next [-] | | I do too, but this is a conference, I doubt code was provided. And even then, what you're describing isn't review per se, it's replication. In principle there are entire journals that one can submit replication reports to, which count as actual peer-reviewable publications in themselves. So one needs to be pragmatic about what is expected from a peer review (especially given the imbalance between the resources invested to create one and the lack of resources offered or any meaningful reward). | | |
| ▲ | Majromax a day ago | parent [-] | | > I do too, but this is a conference, I doubt code was provided. Machine learning conferences generally encourage (anonymized) submission of code. However, that still doesn't mean that replication is easy. Even if the data is also available, replication of results might require impractical levels of compute power; it's not realistic to ask a peer reviewer to pony up for a cloud account to reproduce even medium-scale results. |
| |
| ▲ | lesam a day ago | parent | prev | next [-] | | If there’s anything I would want to run to verify, I ask the author to add a unit test. Generally, the existing CI tests plus the new tests in the PR having run successfully is enough. I might pull and run it if I am not sure whether a particular edge case is handled. Reviewers wanting to pull and run many PRs would make me think your automated tests need improvement. | |
| ▲ | Terr_ a day ago | parent | prev | next [-] | | I don't, but that's because ensuring the PR compiles and passes old+new automated tests is an enforced requirement before it goes out. So running it myself involves judging other risks, much higher-level ones than bad unicode characters, like the GUI button being in the wrong place. | |
| ▲ | grayhatter a day ago | parent | prev | next [-] | | > Do other PR reviewers not do this? Some do; many (like peer reviewers) are unable to consider the consequences of their negligence. But it's always a welcome reminder that some people care about doing good work. That's easy to forget browsing HN, so I appreciate the reminder :) | |
| ▲ | vkou a day ago | parent | prev [-] | | > Do other PR reviewers not do this? No, because this is usually a waste of time: CI enforces that the code and the tests run at submission time. If your CI isn't doing that, you should put some work into configuring it. If you regularly have to do this, your codebase should probably have more tests. If you don't trust the author, ask them to include test cases for whatever it is you are concerned about. |
| |
| ▲ | grayhatter a day ago | parent | prev | next [-] | | > This is true, but here the equivalent situation is someone using a Greek question mark (";") instead of a semicolon (";"), No, it's not. I think you're trying to make a different point, because you're using an example of a specific, deliberately malicious way to hide a token error that prevents compilation but is visually similar. > and you as a code reviewer are only expected to review the code visually and are not provided the resources required to compile the code on your local machine to see the compiler fail. What weird world are you living in where you don't have CI? Also, it's pretty common that I'll test code locally when reviewing something more complex or more important, if I don't have CI. > Yes, in theory you can go through every semicolon to check that it isn't actually a Greek question mark, but one assumes good faith and baseline competence, such that you as the reviewer would generally not be expected to perform such pedantic checks. I don't, because it won't compile. Not because I assume good faith. References and citations are similar to introducing dependencies, and we're talking about completely fabricated deps: e.g., this engineer went on npm and grabbed the first package that said left-pad, but it's actually a crypto miner. We're not talking about a citation missing a page number or publication year. We're talking about something completely incorrect being represented as relevant. > So if you think you might reasonably have missed Greek question marks in a visual code review, then hopefully you can also appreciate how a paper reviewer might miss a false citation. I would never miss this, because the important thing is that the code needs to compile. If it doesn't compile, it doesn't reach the master branch. Peer review of a paper doesn't have CI, I'm aware, but it's also not vulnerable to syntax errors like that. A paper with a fake semicolon isn't meaningfully different from one without, so this analogy doesn't map to the fraud I'm commenting on. | |
| ▲ | tpoacher a day ago | parent [-] | | You have completely missed the point of the analogy. Breaking the analogy beyond the point where it is useful by introducing non-generalising specifics is not a useful argument. Otherwise I could counter your more specific, non-generalising analogy by introducing little green aliens sabotaging your imaginary CI, with the same ease and effect. | |
| ▲ | grayhatter a day ago | parent [-] | | I disagree that you could do that and claim to be reasonable. But I agree, because I'd rather discuss the pragmatics than bicker over the semantics of an analogy. Introducing a token error is different from plagiarism, no? Someone writing code that can't compile is different from someone "stealing" proprietary code from some company and contributing it to some FOSS repo? In order to assume good faith, you also need to assume the author is the origin. But that's clearly not the case. The origin is somewhere else, and the author who put their name on the paper didn't verify it and didn't credit it. | |
| ▲ | tpoacher a day ago | parent [-] | | Sure, but the focus here is on the reviewer, not the author. The point is what counts as reasonable review before one can "sign their name on it". "Lazy" (or possibly malicious) authors will always have incentives to cut corners as long as no mechanisms exist to reject (or even penalise) the paper automatically on submission, which would be the equivalent of a "compiler error" in the code analogy. Effectively the point is that, in the absence of such tools, the reviewer can only reasonably be expected to "look over the paper" for high-level issues; catching such low-level issues via manual checks by reviewers has massively diminishing returns for the extra effort involved. So I don't think it is appropriate for the conference to shame the reviewers here while not providing such tooling. |
|
|
| |
| ▲ | xvilka a day ago | parent | prev [-] | | Code correctness should be checked automatically by CI and the test suite, and new tests should be added. This is exactly what makes sure these stupid errors don't bother the reviewer. Same for code formatting and documentation. | |
| ▲ | merely-unlikely a day ago | parent | next [-] | | This discussion makes me think peer review needs more automated tooling, somewhat analogous to what software engineers have long relied on. For example, a tool could use an LLM to check that a citation actually substantiates the claim the paper says it does, or else flag the claim for review. | |
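As a rough illustration of that idea (not GPTZero's pipeline or any existing tool), a minimal sketch in Python, with a placeholder ask_llm standing in for whatever model API would actually be used and the verdict labels invented for the example:

    # Hypothetical citation-support checker: for each claim/reference pair,
    # ask an LLM whether the cited text backs the claim, and flag anything
    # that is not clearly supported for a human to look at.
    from dataclasses import dataclass

    @dataclass
    class Citation:
        claim: str       # sentence in the submission that carries the citation
        reference: str   # title plus abstract (or full text) of the cited work

    def ask_llm(prompt: str) -> str:
        """Placeholder for whatever LLM API is actually available."""
        raise NotImplementedError

    def flag_unsupported(citations: list[Citation]) -> list[Citation]:
        """Return the citations the model could not confirm as supporting their claim."""
        flagged = []
        for c in citations:
            verdict = ask_llm(
                "Does the reference below support the claim? "
                "Answer exactly SUPPORTED, NOT_SUPPORTED, or UNCLEAR.\n\n"
                f"Claim: {c.claim}\n\nReference: {c.reference}"
            )
            if verdict.strip().upper() != "SUPPORTED":
                flagged.append(c)  # route to a human reviewer; never auto-reject
        return flagged

The point of the design is triage, not judgment: the tool only decides what a human has to look at more closely.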
| ▲ | noitpmeder a day ago | parent | next [-] | | I'd go one further and say all published papers should come with a clear list of "claimed truths", and one should only be able to cite a paper by linking to one of its explicit truths. Then you can build a true hierarchy of citation dependencies, checked 'statically', and get a better indication of the impact if a fundamental truth is disproven, ... | |
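A toy sketch of how such a claim-level dependency graph could be walked, with made-up claim IDs; nothing here reflects an existing system:

    # Each paper exports named claims; a citation points at a specific claim ID.
    # If a claim is disproven, everything that transitively depends on it can be flagged.
    from collections import defaultdict

    # claim ID -> claim IDs it cites (example data, entirely invented)
    cites: dict[str, list[str]] = {
        "smith2021:thm1": [],
        "jones2023:result2": ["smith2021:thm1"],
        "wu2024:claim3": ["jones2023:result2"],
    }

    def affected_by(disproven: str) -> set[str]:
        """Every claim that transitively depends on the disproven claim."""
        dependents = defaultdict(list)  # reverse edges: cited -> citing
        for claim, deps in cites.items():
            for d in deps:
                dependents[d].append(claim)
        affected, stack = set(), [disproven]
        while stack:
            for nxt in dependents[stack.pop()]:
                if nxt not in affected:
                    affected.add(nxt)
                    stack.append(nxt)
        return affected

    print(sorted(affected_by("smith2021:thm1")))
    # ['jones2023:result2', 'wu2024:claim3']

The graph part is trivial; the hard part, as the reply below notes, is getting real papers to export clean, citable claims in the first place.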
| ▲ | vkou a day ago | parent [-] | | Have you authored a lot of non-CS papers? Could you provide a proof of concept paper for that sort of thing? Not a toy example, an actual example, derived from messy real-world data, in a non-trivial[1] field? --- [1] Any field is non-trivial when you get deep enough into it. |
| |
| ▲ | alexcdot a day ago | parent | prev [-] | | Hey, I'm part of the GPTZero team that built the automated tooling used to get the results in that article! Totally agree with your thinking here: we can't just hand this to an LLM, because of the need for industry-specific standards for what counts as a hallucination vs. a match, and for how to do the search. |
| |
| ▲ | thfuran a day ago | parent | prev [-] | | What exactly is the analogy you’re suggesting, using LLMs to verify the citations? | | |
| ▲ | tpoacher a day ago | parent [-] | | Not OP, but that wouldn't really be necessary. One could submit their BibTeX files and expect the citations to be verifiable by a low-level checker. Worst case, if your BibTeX entry was a variant of one in the checker's database, you'd be asked to correct it to match the canonical version. However, as others here have stated, hallucinated "citations" are actually the lesser problem. Citing irrelevant papers based on a fly-by reference is a much harder problem; it existed even before LLMs, but it has become far worse with them. | |
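For the existence/canonical-form part, a rough sketch of what such a low-level checker could look like, assuming the public Crossref REST API's works endpoint and the requests library; the helper names are made up and the matching heuristic is deliberately crude:

    # Rough existence check for a citation: look the title up on Crossref
    # and compare what comes back against what the author claims to cite.
    import requests

    def crossref_best_match(title: str) -> dict | None:
        """Return Crossref's best bibliographic match, or None if nothing comes back."""
        resp = requests.get(
            "https://api.crossref.org/works",
            params={"query.bibliographic": title, "rows": 1},
            timeout=10,
        )
        resp.raise_for_status()
        items = resp.json()["message"]["items"]
        return items[0] if items else None

    def citation_looks_real(title: str, year: str) -> bool:
        """Coarse check: does some indexed work roughly match this title and year?"""
        match = crossref_best_match(title)
        if not match or not match.get("title"):
            return False
        matched_title = match["title"][0].lower()
        matched_year = str(match.get("issued", {}).get("date-parts", [[None]])[0][0])
        return title.lower() in matched_title and year == matched_year

    # Quick manual spot check (result depends on what Crossref has indexed):
    print(citation_looks_real("Attention Is All You Need", "2017"))

A real checker would match on DOI and authors as well and handle preprints, but even this level of automation catches the purely invented entries.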
| ▲ | thfuran a day ago | parent [-] | | Yes, I think verifying mere existence of the cited paper barely moves the needle. I mean, I guess automated verification of that is a cheap rejection criterion, but I don’t think it’s overall very useful. | | |
| ▲ | alexcdot a day ago | parent [-] | | Really good point. One of the cofounders of GPTZero here! The tool GPTZero used in the article also detects whether the citation supports the claim; see "cited information accuracy" here: https://app.gptzero.me/documents/1641652a-c598-453f-9c94-e0b... This is still in beta because it's a much harder problem, since it's hard to determine whether a 40-page paper supports a claim (if the paper claims X is computationally intractable, does that mean algorithms to compute approximate X are slow?). |
|
|
|
|
|
|
| ▲ | pron a day ago | parent | prev | next [-] |
| That is not, cannot be, and shouldn't be, the bar for peer review. There are two major differences between it and code review: 1. A patch is self-contained and applies to a codebase you have just as much access to as the author. A paper, on the other hand, is just the tip of the iceberg of the research work, especially if there is some experiment or data collection involved. The reviewer does not have access to, say, videos of how the data was collected (and even if they did, they don't have the time to review all of that material). 2. The software is also self-contained. That's "production". But a scientific paper does not necessarily aim to represent scientific consensus, but rather a finding by a particular team of researchers. If a paper's conclusions are wrong, it's expected that they will be refuted by another paper. |
| |
| ▲ | grayhatter a day ago | parent [-] | | > That is not, cannot be, and shouldn't be, the bar for peer review. Given the repeatability crisis I keep reading about, maybe something should change? > 2. The software is also self-contained. That's "production". But a scientific paper does not necessarily aim to represent scientific consensus, but rather a finding by a particular team of researchers. If a paper's conclusions are wrong, it's expected that they will be refuted by another paper. This is a much, MUCH stronger point. I would have led with this, because the contrast between this assertion and my comparison to prod is night and day. The rules for prod are different from the rules of scientific consensus. I regret losing sight of that. | |
| ▲ | garden_hermit a day ago | parent | next [-] | | > Given the repeatability crisis I keep reading about, maybe something should change? The replication crisis — assuming that it is actually a crisis — is not really solvable with peer review. If I'm reviewing a psychology paper presenting the results of an experiment, I am not able to re-conduct the entire experiment as presented by the authors, which would require completely changing my lab, recruiting and paying participants, and training students & staff. Even if I did this and came to a different result than the original paper, what would it mean? Maybe I did something wrong in the replication, maybe the result is only valid for certain populations, maybe inherent statistical uncertainty means we just get different results. Again, the replication crisis — such as it exists — is not the result of peer review. | |
| ▲ | hnfong a day ago | parent | prev [-] | | IMHO what should change is we stop putting "peer reviewed" articles on a pedestal. Even if peer review were as rigorous as code review (and the former is usually unpaid), we all know that reviewed code still has bugs, and a programmer would be nuts to go around saying "this code was reviewed by experts, so we can assume it's bug free, right?" But too many people just assume that peer review means an article is somehow automatically correct. | |
| ▲ | vkou a day ago | parent [-] | | > IMHO what should change is we stop putting "peer reviewed" articles on a pedestal. Correct. Peer review is a minimal and necessary but not sufficient step. |
|
|
|
|
| ▲ | freehorse a day ago | parent | prev | next [-] |
| A reviewer is assessing the relevance and "impact" of a paper rather than its correctness directly. Reviewers may not even have access to the data the authors used. The way it essentially works is that an editor asks the reviewers "is this paper worth publishing in my journal?" and the reviewers basically have to answer that question. The process is actually the editor/journal's responsibility. |
|
| ▲ | chroma205 a day ago | parent | prev | next [-] |
| > I've always assumed peer review is similar to diff review, where I'm willing to sign my name onto the work of others. If I approve a diff/PR and it takes down prod, it's just as much my fault, no? No. Modern peer review is “how can I do minimum possible work so I can write ‘ICLR Reviewer 2025’ on my personal website” |
| |
| ▲ | freehorse a day ago | parent | next [-] | | The vast majority of people I see do not even mention who they review for in CVs etc. It is usually more akin to volunteer-based, thankless work. Unless you are an editor or something at a journal, what you review for does not count for much of anything. | |
| ▲ | grayhatter a day ago | parent | prev [-] | | > No. [...] how can I do minimum possible work I don't know, I still think this describes most of the reviews I've seen. I just hope most devs who do this know better than to admit to it. |
|
|
| ▲ | bjourne a day ago | parent | prev [-] |
| For ICLR reviewers were asked to review 5 papers in two weeks. Unpaid voluntary work in addition to their normal teaching, supervision, meetings, and other research duties. It's just not possible to understand and thoroughly review each paper even for topic experts. If you want to compare peer review to coding, it's more like "no syntax errors, code still compiles" rather than pr review. |
| |
| ▲ | alexcdot a day ago | parent [-] | | I really like what IJCAI is doing to pay reviewers for this work, using the $100 fee from authors. Yeah, it's insane: the workload reviewers are faced with, plus being an author who gets a review from a novice. |
|