hodgehog11 4 days ago

Scepticism is almost always a good idea with ML papers. Once you start publishing regularly in ML conferences, you understand that there is no traditional form of peer review anymore in this domain. The sheer volume of papers means that 'peers' are often students still coming to grips with the field, assigned to review work that rarely aligns with their expertise. Conference peer review has become a 'vibe check' more than anything.

Real peer review is when other experts independently verify the claims in your arXiv submission through implementation and (hopefully) cite you in their follow-up work. This thread is real peer review.

dleeftink 4 days ago | parent | next [-]

I appreciate this insight. It makes you wonder: why even publish a paper if it only amounts to a vibe check? If it's just the code we need, we can get that peer reviewed through other channels.

thfuran 4 days ago | parent [-]

Because publication count is the number that academics have to make go up.

hodgehog11 4 days ago | parent | next [-]

This, and the exposure. There are so many papers on arXiv now that people often look to conference or journal publication lists instead.

dleeftink 3 days ago | parent | prev [-]

The number has clearly ceased to serve its function, so what are we chasing?

gavinray 3 days ago | parent [-]

Clout, funding, and employment I'd imagine?

rapatel0 4 days ago | parent | prev | next [-]

THIS is so true, but it's also not limited to ML.

Having been both an author and a reviewer across multiple engineering, science, and biomedical disciplines, I can say this occurs throughout academia.

naasking a day ago | parent | prev [-]

> Once you start publishing regularly in ML conferences, you understand that there is no traditional form of peer review anymore in this domain.

Which is fine, because peer review is not a good proxy for quality or validity.