tjohns, 4 hours ago:
I think it's good to read papers and be curious. It's also good to work with your doctors (as you seem to have done), have a discussion, and mutually agree on a plan of treatment. Experts don't know everything. But they probably know some things you don't, and can think of questions you might not even have thought to ask. As the saying goes, "you don't know what you don't know." Experience matters.

There are also a lot of people out there without an academic background who don't know how to properly read journal papers. It's common to see folks do a quick search on PubMed, cherry-pick a single paper they agree with, and treat it as gospel - even if there's no evidence of repeatability. These skills aren't something that many people outside STEM are exposed to.
dekhn, 4 hours ago (reply):
Cherry-picking is bad, but worse is reading a paper and thinking you understand what it says when you don't. Or thinking that a paper and its data can be read neutrally as a factual and accurate account of the work that was actually done.

My experience in journal club - basically, a group of grad students who all read a paper and then discuss it in person - taught me that most papers are just outright wrong for technical reasons. I'd say about 1 in 5 to 1 in 10 papers passes all the basic tests, and even the ones that do pass can have significant problems. For example, there is increasing recognition that many papers in biology and medicine have fake, manipulated, corrupted, or incorrectly labelled data. I know folks who've read papers and convinced themselves a paper was good, only for it to be retracted later because the authors copied a few gels into the wrong columns...