scythmic_waves a day ago

> as a code reviewer [you] are only expected to review the code visually and are not provided the resources required to compile the code on your local machine to see the compiler fail.

As a PR reviewer I frequently pull down the code and run it. Especially if I'm suggesting changes because I want to make sure my suggestion is correct.

Do other PR reviewers not do this?

dataflow a day ago | parent | next [-]

I don't commonly do this, and I don't know many people who do it frequently either. But it depends strongly on the code, the risks, the gains of doing so, the contributor, the project, the state of testing, how else an error would get caught (I guess this is another way of saying "it depends on the risks"), etc.

E.g. you can imagine that if I'm reviewing changes to authentication logic, I'm obviously going to put a lot more effort into validation than if I'm reviewing a container and wondering whether it would be faster as a hash table instead of a tree.
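That hash-table-vs-tree question is also the kind of thing I can sanity-check in isolation with a throwaway benchmark instead of pulling the whole branch. A purely illustrative sketch (names and sizes are made up; Python's bisect over a sorted list stands in for a tree-backed map):

    # Illustrative micro-benchmark: dict (hash table) lookups vs. binary search
    # over a sorted list as a stand-in for a tree-backed container.
    import bisect
    import random
    import timeit

    N = 100_000
    keys = list(range(N))
    random.shuffle(keys)
    probe = random.sample(keys, 1_000)

    hash_map = {k: k for k in keys}   # hash-table container
    sorted_keys = sorted(keys)        # tree stand-in: sorted keys + bisect

    def lookup_hash():
        for k in probe:
            _ = hash_map[k]

    def lookup_tree():
        for k in probe:
            i = bisect.bisect_left(sorted_keys, k)
            _ = sorted_keys[i]

    print("hash lookups:", timeit.timeit(lookup_hash, number=100))
    print("tree lookups:", timeit.timeit(lookup_tree, number=100))

Nothing about that requires the contributor's build, which is exactly why I rarely bother pulling the code for questions at that level.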

> because I want to make sure my suggestion is correct.

In this case I would just ask "have you already tried X?", which is much faster than pulling their code, implementing your suggestion, and waiting for a build and tests to run.

tpoacher a day ago | parent | prev | next [-]

I do too, but this is a conference; I doubt code was provided.

And even then, what you're describing isn't review per se; it's replication. In principle there are entire journals one can submit replication reports to, and those count as peer-reviewable publications in their own right. So one needs to be pragmatic about what is expected from a peer review, especially given the imbalance between the resources invested in producing one and the lack of resources offered or meaningful reward given in return.

Majromax a day ago | parent [-]

> I do too, but this is a conference; I doubt code was provided.

Machine learning conferences generally encourage (anonymized) submission of code. However, that still doesn't mean that replication is easy. Even if the data is also available, replication of results might require impractical levels of compute power; it's not realistic to ask a peer reviewer to pony up for a cloud account to reproduce even medium-scale results.

lesam a day ago | parent | prev | next [-]

If there’s anything I would want to run to verify, I ask the author to add a unit test. Generally, the existing CI tests plus the new tests in the PR passing is enough. I might pull and run the code if I'm not sure whether a particular edge case is handled.
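For instance, if I'm worried about an empty input or an uneven final chunk, I'd ask for something along these lines rather than checking the branch out myself (the function here is made up, purely to show the shape of the request):

    # Hypothetical edge-case tests a reviewer might request; chunk_list is a
    # stand-in for whatever function the PR actually touches.
    import unittest

    def chunk_list(items, size):
        """Split items into lists of at most `size` elements."""
        if size <= 0:
            raise ValueError("size must be positive")
        return [items[i:i + size] for i in range(0, len(items), size)]

    class ChunkListEdgeCases(unittest.TestCase):
        def test_empty_input(self):
            self.assertEqual(chunk_list([], 3), [])

        def test_size_larger_than_input(self):
            self.assertEqual(chunk_list([1, 2], 5), [[1, 2]])

        def test_uneven_final_chunk(self):
            self.assertEqual(chunk_list([1, 2, 3, 4, 5], 2),
                             [[1, 2], [3, 4], [5]])

    if __name__ == "__main__":
        unittest.main()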

If reviewers feel the need to pull and run many PRs, that makes me think your automated tests need improvement.

Terr_ a day ago | parent | prev | next [-]

I don't, but that's because ensuring the PR compiles and passes old+new automated tests is an enforced requirement before it goes out.

So running it myself involves judging other risks, much higher-level ones than bad unicode characters, like the GUI button being in the wrong place.

grayhatter a day ago | parent | prev | next [-]

> Do other PR reviewers not do this?

Some do; many, like peer reviewers, are unable to consider the consequences of their negligence.

But it's always a welcome reminder that some people care about doing good work. That's easy to forget browsing HN, so I appreciate the reminder :)

vkou a day ago | parent | prev [-]

> Do other PR reviewers not do this?

No, because it's usually a waste of time: CI enforces that the code builds and the tests pass at submission time. If your CI isn't doing that, you should put in some work to configure it.
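As a rough illustration of what that gate amounts to (the commands are placeholders; in a real setup this policy lives in the CI system's own configuration rather than a hand-rolled script):

    # Sketch of the submission-time gate: build, then run the test suite, and
    # fail the job if either step fails. "make build"/"make test" are
    # placeholder commands.
    import subprocess
    import sys

    STEPS = [
        ["make", "build"],   # placeholder build step
        ["make", "test"],    # placeholder test step (old + new tests)
    ]

    for step in STEPS:
        result = subprocess.run(step)
        if result.returncode != 0:
            print("gate failed at: " + " ".join(step), file=sys.stderr)
            sys.exit(result.returncode)

    print("gate passed: the code builds and the tests run")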

If you regularly have to do this, your codebase should probably have more tests. If you don't trust the author, you should ask them to include test cases for whatever it is that you are concerned about.