emil-lp | 2 days ago:
That's not at all how peer review works.

yorwba | 2 days ago:
It's not how pre-publication peer review works. There, the problem is that many papers aren't worth reading, but to determine whether a given paper is worth reading, someone has to read it and find out. So the work of reading papers of unknown quality is farmed out to a large number of people, each reading a small number of randomly assigned papers. If somebody's paper doesn't get assigned as mandatory reading to random reviewers, but people read it anyway and cite it in their own work, they're doing a form of post-publication peer review. What additional information do you think pre-publication peer review would give you?

adroniser | a day ago:
Peer review would encourage less hand-wavy language and more precise claims. Reviewers would penalize the authors for bringing up bizarre analogies to physics concepts for seemingly no reason. They would criticize the fact that the authors spend the whole post talking about features without a concrete definition of a feature. The sloppiness of the circuits-thread blog posts has been very damaging to the health of the field, in my opinion. People first learn about mech interp from these blog posts, and then they adopt a similarly sloppy style in discussion. Frankly, the whole field is currently just a big circle jerk, and it's hard not to think these blog posts are responsible for that. I mean, do you actually think this kind of slop would be publishable at NeurIPS if they submitted the blog post as is?

PeterStuer | a day ago:
"Peer review would encourage less hand-wavy language and more precise claims"

In theory, yes. Let's not pretend actual peer review would do this.

adroniser | a day ago:
So you think that this blog post would make it into any of the mainstream conferences? I doubt it.

sdenton4 | a day ago:
IME, most of the reviewers at the big ML conferences are second-year PhD students sent into the breach against the overwhelming tide of 10k submissions... Their review comments are often somewhere between useless and actively promoting scientific dishonesty. Sometimes we get good reviewers, who ask questions and make comments that improve the quality of a paper, but I don't really expect it in the conference track.

It's much more common to get good reviewers in smaller journals, in domains where the reviewers are experts and care about the subject matter. OTOH, the turnaround for publication in these journals can take a long time.

Meanwhile, some of the best and most important observations in machine learning never went through the conference circuit, simply because the scientific paper often isn't the best venue for a broad observation... The OG paper on linear probes comes to mind: https://arxiv.org/pdf/1610.01644

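For readers unfamiliar with that linear-probe paper: the idea in Alain & Bengio (arXiv:1610.01644) is to freeze a trained network and fit only a linear classifier on the activations of one intermediate layer; the probe's accuracy then indicates how linearly decodable the labels are at that depth. Below is a minimal sketch of the idea in PyTorch, using a stand-in frozen MLP and synthetic data rather than anything from the paper:

    # Linear probe sketch: train only a linear classifier on frozen activations.
    # The backbone and data here are hypothetical stand-ins, not the paper's setup.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Stand-in for a pretrained network: a small frozen MLP.
    backbone = nn.Sequential(nn.Linear(32, 64), nn.ReLU(),
                             nn.Linear(64, 64), nn.ReLU())
    for p in backbone.parameters():
        p.requires_grad_(False)

    # Synthetic data in place of a real dataset.
    x = torch.randn(512, 32)
    y = torch.randint(0, 10, (512,))

    probe = nn.Linear(64, 10)  # the only trainable part
    opt = torch.optim.Adam(probe.parameters(), lr=1e-2)

    for _ in range(200):
        with torch.no_grad():
            feats = backbone(x)  # frozen intermediate activations
        loss = nn.functional.cross_entropy(probe(feats), y)
        opt.zero_grad()
        loss.backward()
        opt.step()

    acc = (probe(backbone(x)).argmax(dim=1) == y).float().mean().item()
    print(f"probe accuracy on the training set: {acc:.2f}")

The same recipe is applied one layer at a time across a real network; comparing probe accuracies across depths is how the paper characterizes intermediate representations.
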
adroniser | a day ago:
For papers submitted to a conference, it may be that reviewers don't offer suggestions that would significantly improve the quality of the work. Indeed, the quality of reviews has gone down significantly in recent years. But if Anthropic were going to submit this work to peer review, they would be forced to tighten it up significantly. The linear probe paper is still written in a format in which it could reasonably be submitted, and indeed it was submitted to an ICLR workshop.

LolWolf | a day ago (replying to emil-lp):
What? Yes it is! This is exactly how peer review works! People look at the paper, read it, and then reproduce it, poke holes in it, etc. Peer review has nothing to do with "being published in some fancy-looking formatted PDF in some journal after passing an arbitrary committee" or whatever; it's literally review by your peers.

Now, do I have problems with this specific paper and how it's written in a semi-magical way that surely requires the reader to suspend disbelief? For sure, but that's completely independent of the "peer review" aspect of it.

emil-lp | a day ago:
If you believe that citation is the same as review, I have stuff to sell you. Reviewing a paper can easily take three weeks of full-time work. Looking at a paper, assuming it is correct, and then citing it can literally take seconds. I'm a researcher, and there are definitely two modes of reading papers: review mode and usage mode.

LolWolf | a day ago:
No, I don't believe that a citation is the same as peer review; it is, as I understand the post you're replying to, an observation that other people have looked at the work. (Indeed, if you actually go through and _read_ this particular paper, you can see that they also cite multiple reproductions and extensions of it. I would call that peer review, and certainly much better peer review than most of the nonsense that comes out of committees. Ask me how I know.)

golem14 | 2 days ago:
Unless it's part of a link review farm. I haven't looked, and you are probably correct, but I would do a bit of research before making any assumptions.