I've spent a lot of time recently doing a deep dive into all the academic papers in a particular field of neurobiology, and I have a different approach:
1. Read the title and abstract
2. Read the methodology
3. If the methodology isn't totally asinine, then read the rest
I discovered after reading hundreds of papers that most of them are total nonsense.
I found myself reading them and updating my "knowledge," then getting to the methodology section and realizing it was bullshit. I felt like I couldn't fully "un-update" my model when that happened; the damage was already done.
They make claims, and they have "evidence" that sounds compelling enough to slip past some filter, but the methodology is totally bunk. The n is way too small, or the sample is limited in some other fundamental way, or the experimental design is idiotic cargo-cult stuff. The authors are obviously just going through the motions of publishing because they have to, rather than because they have some valuable insight. Then papers like that cite each other and build a whole wobbly network, full of sound and fury, signifying nothing.
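For what it's worth, "the n is way too small" is often something you can sanity-check yourself with a quick power calculation. Here is a minimal sketch in Python (my own illustration, not from the paper-reading workflow above; it assumes a simple two-group comparison and uses the statsmodels library, with an effect size picked purely for the example):

```python
# Rough power-analysis sketch: for a two-sample t-test, how many subjects per
# group are needed to detect a "medium" effect (Cohen's d = 0.5) with 80% power
# at alpha = 0.05? The effect size and design are assumptions for illustration.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8,
                                    alternative='two-sided')
print(f"Subjects needed per group: {n_per_group:.0f}")  # roughly 64 per group
```

If a paper claims a subtle effect from a dozen animals per group, a back-of-the-envelope check like this tells you how underpowered it likely is.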
I've learned to totally ignore any paper that I haven't read the methodology for first.
Critiquing the methodology section is the best part of reading scientific papers. However, it becomes bittersweet once you run your own experiments and realize how difficult it is to design a good one. That said, if journalists (or anyone, really) read methodology sections, the world would be a much better and less sensationalised place.