mike_hearn 2 hours ago
Thanks for reading it, or skim-reading it, maybe. Of the 18 papers discussed in the essay, here's what they're about, in order:

- Alzheimer's
- Cancer
- Alzheimer's
- Skin lesions (the first paper discussed in the linked blog post)
- Epidemiology (COVID)
- Epidemiology (COVID, foot-and-mouth disease, Zika)
- Misinformation/bot studies
- More misinformation/bot studies
- Archaeology/history
- PCR testing (in general; the discussion opens with testing for whooping cough)
- Psychology, twice (assuming you count "men would like to be more muscular" as a psych claim)
- Misinformation studies
- COVID (the highlighted errors in the paper are objective, not subjective)
- COVID (the highlighted errors are software bugs, i.e. objective)
- COVID (a fake replication report that didn't successfully replicate anything)
- Public health (from 2010)
- Social science

I don't agree that your summary of this as a "valid and common but subjective political dispute" is accurate. There's no politics involved in any of these discussions or problems, just bad science.

Immunology has the same issues as most other medical fields. Sure, there's also fraud that requires genuinely deep expertise to find, but there's plenty that doesn't. Here's a random immunology paper from a few days ago identified as having image duplications, Photoshopped western blots, numerous irrelevant citations, and weird sentence breaks, all suggestive that the paper might have been entirely faked or at least partly generated by AI: https://pubpeer.com/publications/FE6C57F66429DE2A9B88FD245DD...

The authors reply, claiming the problems are just rank incompetence, and each time someone finds yet another problem with the paper it leads to yet another apology and proclamation of incompetence. It's just another day on PubPeer; there's nothing special about this paper. I plucked it off the front page.
Zero wet-lab experience is needed to understand why the exact same image being presented as two different things in two different papers is a problem. And as for other fields, the errors are often extremely shallow. I actually am an expert in bot detection, but that doesn't help at all in detecting validity errors in social science papers, because they do things like define a bot as anyone who tweets five times after midnight from a smartphone. A 10-year-old could notice that this isn't true.
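To make concrete how shallow that kind of definition is, here is a minimal sketch of the heuristic described above. The function name, the tweet representation, and the choice of a midnight-to-6am window are my own illustrative assumptions, not taken from any real paper:

```python
from datetime import time

# Hypothetical sketch of the shallow "bot" heuristic criticized above:
# flag any account with five or more post-midnight smartphone tweets.
# The 00:00-06:00 window and the threshold of 5 are assumed for illustration.
def is_bot(tweets, threshold=5):
    """tweets: list of (posted_at: datetime.time, source: str) tuples."""
    late_night_mobile = [
        posted_at for posted_at, source in tweets
        if time(0, 0) <= posted_at < time(6, 0) and source == "smartphone"
    ]
    return len(late_night_mobile) >= threshold
```

That's the entire classifier; any night-shift worker posting from their phone gets labeled a bot, which is the point of the criticism.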