| ▲ | directevolve a day ago |
| Punishing researchers who make mistakes or get unlucky due to noise in the data is a recipe for disaster, just like in other fields. The ideal amount of fraud and false claims in research is not zero, because the policing effort it would take to accomplish this goal would destroy all other forms of value. I can't emphasize enough how bad an idea blackballing researchers for publishing irreproducible results would be. We have money to fund direct reproducibility studies (this one is an example), and indirect replication by applying orthogonal methods to similar research topics can be more powerful than direct replication. |
|
| ▲ | MostlyStable a day ago | parent | next [-] |
| Completely agree. Given the way that science and statistics work, completely honest researchers who do everything correctly and make no mistakes at all will still have some research that fails to reproduce. And the flip side is that for some completely correct work that got the right answer, some proportion of the time the replication attempt will itself incorrectly fail to reproduce it. Type 1 and Type 2 errors are both real and occur without any need for misconduct or mistakes. |
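A quick simulation makes this concrete; this is a minimal sketch, and the effect size, sample size, and significance threshold below are illustrative assumptions, not values from any particular study:

```python
import random, statistics

random.seed(0)
Z_CRIT = 1.96          # two-sided 5% significance threshold
N = 30                 # per-study sample size (assumed)
EFFECT = 0.4           # assumed true effect size, in SD units

def significant(true_effect):
    """One study: sample N observations, z-test the mean against zero (known SD = 1)."""
    sample = [random.gauss(true_effect, 1.0) for _ in range(N)]
    z = statistics.mean(sample) * N ** 0.5   # z = mean / (sigma / sqrt(N)), sigma = 1
    return abs(z) > Z_CRIT

TRIALS = 10_000
# Type 1 error: a null effect comes out "significant" by chance.
type1 = sum(significant(0.0) for _ in range(TRIALS)) / TRIALS
# Type 2 error: a real effect fails to reach significance in a replication.
type2 = sum(not significant(EFFECT) for _ in range(TRIALS)) / TRIALS

print(f"null effects reported significant: {type1:.1%}")    # ~5%, by construction
print(f"real effects failing to replicate: {type2:.1%}")    # roughly 40% at this power
```

Even with a genuine effect and flawless execution, a sizable fraction of replications miss significance purely by chance at this sample size.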
|
| ▲ | dilap 21 hours ago | parent | prev | next [-] |
| Well, don't forget I also said this! > With whatever sort of due process is needed to make this reasonable Is it not reasonable to stop funding scientists whose results consistently fail to reproduce? And should we not spend the funds to verify that they do (or don't) reproduce (rather than e.g. going down an incredibly expensive goose chase like recently happened w/ Alzheimer's research)? Currently there is more or less no reason not to fudge results; your chances of getting caught are slim, and the consequences are minimal. And if you don't fudge your results, you'll be at a huge disadvantage when competing against everyone who does! Hence the replication crisis. So clearly something must be done. If not penalizing failures to reproduce and funding reproduction efforts, then what? |
| |
| ▲ | jltsiren 20 hours ago | parent [-] | | Your way of thinking sounds alien to me. You seem to assume that people mostly just follow the incentives, rather than acting according to their internal values. Science is a field with low wages, uncertain careers, and relatively little status. If you respond strongly to incentives, why would you choose science in the first place? People tend to choose science for other reasons. And, as a result, incentives are not a particularly effective tool for managing scientists. | | |
| ▲ | dilap 18 hours ago | parent [-] | | Of course people will follow their own internal values in some cases, but we really want to arrange things so that the common and incentivized path is the happy path! And without the proper systemic arrangements, people with strong internal values will just tend to get pushed out. E.g., an example from today's NY Times: https://archive.is/wV4Sn I don't mean to seem too cynical about human nature; it's not that I think people with good motivations won't exist, it's that you need to create a broader ecosystem where those motivations are adaptive. Otherwise they'll just get pushed out. By analogy, consider a competitive sport, like cycling. Imagine if it were just an honor system not to use performance-enhancing drugs; even if 99% of cyclists were completely honest, the sport would still be dominated by cheaters, because you simply wouldn't be able to compete without cheating. The dynamics are similar in science if you allow bad research to go unchallenged. (PS: Being a scientist is very high-status! I can imagine very few things with as much cachet at a dinner party as saying "I'm a scientist".) | | |
| ▲ | jltsiren 16 hours ago | parent [-] | | Internal motivation and acting according to your values are not necessarily good things. For example, repeat offenders are often internally motivated. They keep committing crimes because they don't fit in. And because their motivations are internal, incentives such as strict punishments have limited effect on their behavior. Science selects actively against people who react strongly to incentives. The common and incentivized path is not doing science. Competitive sports are the opposite, as they appeal more to externally motivated people. From a scientist's point of view, the honest 99% of cyclists would absolutely dominate the race, as they ride 99% of the miles. Maybe they won't win, but winning is overrated anyway. Just like prestigious awards, vanity journals, and top universities, it's nice but ultimately not that important. | | |
| ▲ | dilap 13 hours ago | parent [-] | | > Science selects actively against people who react strongly to incentives I don't think this is true at all! If it were true, we would not have the reproducibility crisis and the various other scandals that we do, in fact, have. Scientists are humans like any other, and they respond to incentives. Funding is a game -- you have to play the game in a way that wins to keep getting funding, so idealists who don't care about the rules of the game will necessarily be washed out and not get funding. It's in our collective interest, then, to make sure that winning the game equates to doing good science! | | |
| ▲ | jltsiren 9 hours ago | parent [-] | | In the almost 20 years I've done academic research, I've met thousands of scientists. Some of them have been involved in various scandals, but as far as I know, none of the scandals were about scientific integrity. When it comes to academic scandals, those involving scientific integrity seem to be rare. The reproducibility crisis seems to be mostly about applying the scientific method naively. You study a black box nobody really understands. You formulate a hypothesis, design and perform an experiment, collect data, and analyze the data under a simple statistical model. Often that's the best thing you can do, but you don't get reliable results that way. If you need reliability, you have to build models that explain and predict the behavior of the former black box. You need experiments that build on a large number of earlier experiments and are likely to fail in obvious ways if the foundations are not fundamentally correct. I'm pretty bad at getting grants myself, but I've known some people who are really good at it. And they are not "playing the game", or at least that's not the important part. What sets them apart is the ability to see the big picture, the attention to detail, the willingness to approach the topic from whatever angle is necessary, and a vision of where the field should be going. They are good at identifying the problems that need to be solved and the approaches that will likely solve them. And then finding the right people to solve them. |
|
|
|
|
|
|
| ▲ | JadeNB a day ago | parent | prev [-] |
| > The ideal amount of fraud and false claims in research is not zero, because the policing effort it would take to accomplish this goal would destroy all other forms of value. Surely that just means that we shouldn't spend too much effort achieving small marginal progress towards that ideal, rather than that it's not the ideal? I am a scientist (well, a mathematician), and I can maintain my idealism about my discipline in the face of the idea that we can't and shouldn't try to catch and stop all fraud, but I can't maintain it in the face of the idea that we should aim for a small but positive amount of fraud. |
| |
| ▲ | mrguyorama a day ago | parent [-] | | The point is that it's not actually "ideal". You CANNOT create a system that has zero fraud without rejecting a HUGE amount of legitimate work/requests. This is as true for credit card processing as it is for scientific publishing. There's no such thing as "Reject 100% of fraud, accept 100% of non-fraud". It wouldn't be "ideal" to make our spaceships with anti-gravity drives, it would be "science fiction". The relationship between how hard you push on fraud prevention and how much legitimate traffic you let through is absurdly non-linear, and super dependent on context. Is there still low-hanging fruit in the fraud-prevention pipeline for scientific publishing? That depends. Scientists claim that having to treat each other as hostile entities would basically destroy scientific progress. I wholeheartedly agree. This should be obvious to anyone who has approved a PR from a coworker. Part of our job in code review is to prevent someone from writing code to do hostile things. I'm sure most of us put some effort towards preventing obvious problems, but if you've ever seen https://en.wikipedia.org/wiki/International_Obfuscated_C_Cod... or some of the famous bits of code used to hack nation-states, then you should recognize that the amount of effort it would take to be VERY SURE that a PR doesn't introduce an attack is insane, and no company could afford it. Instead, we assume that job interviews, coworker vibes, and reputation are enough to dissuade that attack vector, and it works for almost everyone except the juiciest targets. Science is a high-trust industry. It also has "juicy targets" like "high-temp superconductor" or "magic pill to cure cancer", but scientists approach everything with "extraordinary claims require extraordinary evidence" and that seems to do alright. They mostly treated LK-99 with "eh, let's not get hasty" even as most of the internet was convinced it was a new era of materials. I think scientists have a better handle on this than the rest of us. | | |
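A minimal sketch of that non-linear tradeoff; the score distributions and fraud rate below are illustrative assumptions, not data from any real review pipeline:

```python
import random

random.seed(0)
# Assumed "suspicion scores": honest work centered at 0, fraud shifted right
# but overlapping, and fraud is rare relative to honest submissions.
honest = [random.gauss(0.0, 1.0) for _ in range(100_000)]
fraud = [random.gauss(2.0, 1.0) for _ in range(1_000)]

# Reject anything scoring at or above the threshold; sweep it downward.
# The last threshold is the one that catches 100% of the fraud.
for threshold in (3.0, 2.0, 1.0, 0.0, min(fraud)):
    fraud_rejected = sum(s >= threshold for s in fraud) / len(fraud)
    honest_rejected = sum(s >= threshold for s in honest) / len(honest)
    print(f"threshold {threshold:+5.2f}: fraud rejected {fraud_rejected:6.1%}, "
          f"honest rejected {honest_rejected:6.1%}")
```

Sweeping the threshold shows the asymmetry: catching most fraud costs little, but rejecting the last few percent of fraud rejects a disproportionate share of honest work.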
| ▲ | JadeNB 20 hours ago | parent [-] | | > The point is that it's not actually "ideal". > You CANNOT create a system that has zero fraud without rejecting a HUGE amount of legitimate work/requests. I think that we are using different definitions of "ideal." It sounds like your definition is something like "practically achievable," or even just "can exist in the real world," in which case, sure, zero fraud is not ideal in that sense. To check whether I am using the word completely idiosyncratically, I just looked it up in Apple Dictionary, and most of the senses seem to match my conception, but I meant especially "2b. representing an abstract or hypothetical optimum." It seems very clear to me that you would agree with zero fraud being ideal in sense "2a. existing only in the imagination; desirable or perfect but not likely to become a reality," but possibly we can even agree that it also fits sense 2b above. |
|
|