sdenton4 a day ago:
IME, most of the reviewers at the big ML conferences are second-year PhD students sent into the breach against the overwhelming tide of 10k submissions. Their review comments are often somewhere between useless and actively promoting scientific dishonesty. Sometimes we get good reviewers, who ask questions and make comments that improve the quality of a paper, but I don't really expect it in the conference track.

It's much more common to get good reviewers at smaller journals, in domains where the reviewers are experts and care about the subject matter. OTOH, the turnaround for publication in these journals can take a long time.

Meanwhile, some of the best and most important observations in machine learning never went through the conference circuit, simply because the scientific paper often isn't the best venue for broad observations. The OG paper on linear probes comes to mind: https://arxiv.org/pdf/1610.01644
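For anyone who hasn't read it: a linear probe is just a linear classifier trained on a frozen network's intermediate activations, used to measure how linearly decodable the labels are at a given layer. A minimal sketch of the idea (the random arrays here are placeholders standing in for activations you'd actually extract from a real model with hooks):

```python
# Illustrative sketch of a linear probe (Alain & Bengio, arXiv:1610.01644):
# fit an ordinary linear classifier on frozen intermediate activations.
# The data below is random placeholder data, not real activations.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-ins for activations from one frozen layer, shape (n_examples, hidden_dim)
acts_train = rng.normal(size=(1000, 128))
acts_test = rng.normal(size=(200, 128))
y_train = rng.integers(0, 10, size=1000)
y_test = rng.integers(0, 10, size=200)

# The probe itself: a plain linear classifier, no hidden layers
probe = LogisticRegression(max_iter=1000)
probe.fit(acts_train, y_train)

# Comparing this accuracy across layers shows where class information
# becomes linearly separable -- the paper's core diagnostic
print("probe accuracy:", probe.score(acts_test, y_test))
```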
adroniser a day ago:
For papers submitted to a conference, it may well be that reviewers don't offer suggestions that would significantly improve the quality of the work; indeed, the quality of reviews has gone down significantly in recent years. But if Anthropic were to submit this work to peer review, they would be forced to tighten it up significantly. The linear probes paper, for its part, is still written in a format in which it could reasonably be submitted, and indeed it was submitted to an ICLR workshop.