gwerbret | 4 days ago
> There is a simple test the public can use for any scientific model: does it make accurate predictions, or not? You don't need to understand how a model works to test that.

It's quite obvious from your position on this matter that you're not a practicing scientist, so it's very unfortunate that your position is so assertive, as it's mostly wrong. To understand the predictions, as it were, you do have to understand the experiments; if you don't, you have no way of knowing if the predictions actually match the outcomes. Most publications follow some form of hypothesis-prediction-experiment-result structure, and it is training and expertise (and corroboration by other experiments, and time) that help determine which of those papers establish new science, and which ones go out with last week's trash. The findings in these areas are seldom accessible until the field is very advanced and/or in practical use, as with the example of GPS you gave elsewhere.

> The biggest problem I see with "establishment" science today is that it doesn't work this way. There is no mechanism for having an independent record that the public can access of predictions vs. reality.

There is; it's called a textbook.
pdonis | 3 days ago
> It's quite obvious from your position on this matter that you're not a practicing scientist

You're correct, I'm not. But I'm also not scientifically ignorant. For example, I actually do understand how GPS works, because I've read and understood technical treatments of the subject. But I also know that I don't need any of that knowledge to know that my smartphone can use GPS to tell me where I am accurately. In other words, it's quite obvious from your position that you haven't thought through what the test I described actually means.

> To understand the predictions, as it were, you do have to understand the experiments; if you don't, you have no way of knowing if the predictions actually match the outcomes.

Sure you do. See my examples of GPS and astronomers' predictions of comet trajectories downthread in response to MengerSponge. It's true that for predictions of things the general public doesn't actually have to care about, it's often not really possible to check them without fairly detailed knowledge of the subject. But those predictions aren't the kind I'm talking about--because they're about things the general public doesn't have to care about.

> There is; it's called a textbook.

Textbooks aren't independent. They're written by scientists. I'm talking about a record that's independent of scientists. For example, being able to verify that GPS works by seeing that your smartphone shows you where you are accurately.
jiggawatts | 4 days ago
An example of how this ideal can go horribly wrong is CERN. There's one apparatus (of each type), and each "experiment" ends up with its own team. Each team develops its own terminology, publishes its own set of papers, and the peer reviews are by... themselves.

I don't work at CERN, but that criticism came from someone who does. They were complaining that they could not understand the papers published by a team down the hall from them. Not about some wildly unrelated area of science, but about the same particles they were studying in a similar manner!

If nobody else can understand the research, if nobody else can reproduce it, then it's not useful science!

Note that this isn't exactly the same as Sabine's criticism of CERN and future supercolliders, but it's related.