Remix.run Logo
mnky9800n 5 days ago

I feel like saying papers pre peer review should be taken with a grain of salt should be stopped. Peer review is not some idealistic scientific endeavour it often leads to bullshit comments, slows down release, is free work for companies that have massive profit margins, etc. From my experience publishing 30+ papers I have received as many bad or useless comments as I have good ones. We should at least default to open peer review and editorial communication.

Science should become a marketplace of ideas. Your other criticisms are completely valid. Those should be what’s front and center. And I agree with you. The conclusions of the paper are premature and designed to grab headlines and get citations. Might as well be posting “first post” on slashdot. IMO we should not see the current standard of peer review as anything other than anachronistic.

chaps 5 days ago | parent | next [-]

Please no. Remember that room temperature superconductor nonsense that went on for way too long? Let's please collectively try to avoid that..

physarum_salad 5 days ago | parent | next [-]

That paper was debunked as a result of the open peer review enabled by preprints! Its astonishing how many people miss that and assume that closed peer review even performs that function well in the first place. For the absolute top journals or those with really motivated editors closed peer review is good. However, often it's worse...way worse (i.e. reams of correct seeming and surface level research without proper methods or review of protocols).

The only advantage to closed peer review is it saves slight scientific embarrassment. However, this is a natural part of taking risks ofc and risky science is great.

P.s. in this case I really don't like the paper or methods. However, open peer review is good for science.

ajmurmann 5 days ago | parent | next [-]

To your point the paper AFAIK wasn't debunked because someone read it carefully but because people tried to reproduce it. Peer reviews don't reproduce. I think we'd be better off with fewer peer reviews and more time spent actually reproducing results. That's why we had a while crisis named after that

jcranmer 5 days ago | parent [-]

> To your point the paper AFAIK wasn't debunked because someone read it carefully but because people tried to reproduce it.

Actually, from my recollection, it was debunked pretty quickly by people who read the paper because the paper was hot garbage. I saw someone point out that its graph of resistivity showed higher resistance than copper wire. It was no better than any of the other claimed room-temperature semiconductor papers that came out that year; it merely managed to catch virality on social media and therefore drove people to attempt to reproduce it.

chaps 5 days ago | parent | prev [-]

To be clear, I'm not saying that peer review is bad!! Quite the opposite.

physarum_salad 5 days ago | parent [-]

Yes ofc! I guess the major distinction is closed versus open peer review. Having observed some abuses of the former I am inclined to the latter. Although if editors are good maybe it's not such a big difference. The superconducting stuff was more of a saga rather than a reasonable process of peer review too haha.

mwigdahl 5 days ago | parent | prev [-]

And cold fusion. A friend's father (a chemistry professor) back in the early 90s wasted a bunch of time trying variants on Pons and Fleischmann looking to unlock tabletop fusion.

tomrod 5 days ago | parent | prev | next [-]

> I feel like saying papers pre peer review should be taken with a grain of salt should be stopped.

Absolutely not. I am an advocate for peer review, warts and all, and find that it has significant value. From a personal perspective, peer review has improved or shot down 100% of the papers that I have worked on -- which to me indicates its value to ensure good ideas with merit make it through. Papers I've reviewed are similarly improved -- no one knows everything and its helpful to have others with knowledge add their voice, even when the reviewers also add cranky items.[0] I would grant that it isn't a perfect process (some reviewers, editors are bad, some steal ideas) -- but that is why the marketplace of ideas exists across journals.

> Science should become a marketplace of ideas.

This already happens. The scholarly sphere is the savanna when it comes to resources -- it looks verdant and green but it is highly resource constrained. A shitty idea will get ripped apart unless it comes from an elephant -- and even then it can be torn to shreds.

That it happens behind paywalls is a huge problem, and the incentive structures need to be changed for that. But unless we want blatant charlatanism running rampant, you want quality checks.

[0] https://x.com/JustinWolfers/status/591280547898462209?lang=e... if a car were a manuscript

srkirk 4 days ago | parent [-]

What happens if (a) the scholarly sphere is continually expanding and (b) no researcher has time to be ripping apart anything? That also suggests (c) Researchers delegate reviewing duties to LLMs.

stonemetal12 5 days ago | parent | prev | next [-]

Rather given the reproducibility crisis, how much salt does peer review nock off that grain? How often does peer review catch fraud or just bad science?

Bender 5 days ago | parent [-]

I would also add, how often are peer reviews the same group of buddy-bro back-scratchers that know if they help that person with a positive peer review that person will return the favor. How many peer reviewers actually reproduce the results? How many peer reviewers would approve a paper if their credentials were on the line?

Ironically, I am waiting for AI to start automating the process of teasing apart obvious pencil whipping, back scratching, buddy-bro behavior. Some believe its in the 1% range of falsified papers and pencil whipped reviews. I expect it to be significantly higher based on reading NIH papers for a long time in the attempt to actually learn things. I've reported the obvious shenanigans and sometimes papers are taken down but there are so many bad incentives in this process I predict it will only get worse.

genewitch 5 days ago | parent [-]

who says it's "1%"? i'd reckon it's closer to 50% than 1%; that could mean 27%, it could mean 40%. I always have this at the back of my mind when i say something, and someone rejects it by citing a paper (or two). I doubt they even read the paper they're telling me to read as proof i am wrong, to start with. And then the "what are the chances this is repro?" itches a bit.

This also ignores the fact that you can find a paper to support nearly everything if one is willing to link people "correlative" studies.

srkirk 4 days ago | parent | prev | next [-]

I believe LLMs have the potential to (for good or ill, depending on your view) destroy academic journals.

The scenario I am thinking of is academic A submitting a manuscript to an academic journal, which gets passed on by the journal editor to a number of reviewers, one of whom is academic B. B has a lot on their plate at the moment, but sees a way to quickly dispose of the reviewing task, thus maintaining a possibly illusory 'good standing' in the journal's eyes, by simply throwing the manuscript to an LLM to review. There are (at least) two negative scenarios here: 1. The paper contains embedded (think white text on a white background) instructions left by academic A to any LLM reading the manuscript to view it in a positive light, regardless of how well the described work has been conducted. This has already happened IRL, by the way. 2. Academic A didn't embed LLM instructions, but receives the review report, which show clear signs that the reviewer either didn't understand the paper, gave unspecific comments, highlighted only typos or simply used phrasing that seems artifically-generated. A now feels aggrieved that their paper was not given the attention and consideration it deserved by an academic peer and now has a negative opinion of the journal for (seemingly) allowing the paper to be LLM-reviewed. And just as journals will have great difficulty filtering for LLM-generated manuscripts, it will also find it very difficult to filter for LLM-generated reviewers reports.

Granted, scenario 2 already happens with only humans in the loop (the dreaded 'Reviewer 2' academic meme). But LLMs can only make this much much worse.

Both scenarios destroy trust in the whole idea of peer-reviewed science journals.

perrygeo 5 days ago | parent | prev [-]

There's two questions at play. First, does the research pass the most rigorous criteria to become widely-accepted scientific fact? Second, does the research present enough evidence to tip your priors and change your personal decisions?

So it's possible to be both skeptical of how well these results generalize (and call for further research), but also heed the warning: AI usage does appear to change something fundamental about our congnitive processes, enough to give any reasonable person pause.