Science has been, and most likely always will be, two steps forward and one step back. Purportedly, the purpose of "peer review" was to enable the culling out of wrong results. As I have argued for a while, in today's world of the open Internet, critique offered in a non-anonymous manner allows for improved openness in research publications. Namely, publications should be open and subject to criticism from named people, with some form of editing to keep things professional and to prevent the unproductive Internet style of bantering.
A few recent articles discuss this point. One by Marcus gives a reasoned overview of some of the more recent critics. He states:
The half-life of nonsensical findings has decreased enormously, sometimes even to before the paper has officially been published. The wholesale shift in the culture of how scientists think about their craft is at least as significant a meta-story as the replicability crisis itself. But the prophets of doom never let their readers in on this happy secret.
It is absolutely correct for onlookers to call for increased skepticism and clearer thinking in science writing. I’ve sometimes heard it said, with a certain amount of condescension, that this or that field of science “needs its popularizers.” But what science really needs is greater enthusiasm for those people who are willing to invest the time to try to sort the truth from hype and bring that to the public. Academic science does far too little to encourage such voices.
That is, there is a tendency for certain "scientists" to seek significant press coverage, and all too often it is their desire for such coverage, combined with the weakness of their speculations, that gives rise to the bad reputations. Replicating results is not what scientists like to do; they like to build on results. It is when they are building on them that the foundation may at times turn out to be made of clay, or even sand. But the process is self-correcting. The issue is whether we can find the feet of clay before time is wasted. Good question. But perhaps an open literature would facilitate this.
Horgan makes some remarks on this topic, to which Marcus responded. Horgan states:
My 1985 investigation of Petrofsky, which I toiled over for months, made my editor so nervous that he wanted to bury it in the back pages of The Institute; I had to go over his head to persuade the publisher that my article deserved front-page treatment. After the article came out, the IEEE formed a panel to investigate not Petrofsky but me. The panel confirmed the accuracy of my reporting.
Since then, I keep struggling to find the right balance between celebrating and challenging alleged advances in science. After all, I became a science writer because I love science, and so I have tried not to become too cynical and suspicious of researchers. I worry sometimes that I’m becoming a knee-jerk critic. But the lesson I keep learning over and over again is that I am, if anything, not critical enough.
Arguably the biggest meta-story in science over the last few years—and one that caught me by surprise–is that much of the peer-reviewed scientific literature is rotten. A pioneer in exposing this vast problem is the Stanford statistician John Ioannidis, whose blockbuster 2005 paper in PLOS Medicine presented evidence that “most current published research findings are false.”
"False" may be a bit too strong and "most" may be a bit exaggerated. Ioannidis states:
Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias. In this essay, I discuss the implications of these problems for the conduct and interpretation of research.
Ioannidis has an interesting statistical methodology, but when the facts are taken into account one may still question his results. Grabbing headlines is always problematic. Were Watson and Crick wrong? We know that Pauling, just weeks before, had proposed his triple helix. Does that make almost everything wrong? Science is, after all, open speculation with an expressed thought process and measurements. Yes, at times we have true fraud, but it seems to be caught more often than not. The essence of the Ioannidis paper is that, upon examining published research results, he claims they have less statistical power than claimed or alleged. This does not make them "wrong" so much as fully disclosed data sets that may have been stretched a bit too far.
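To see where the "most findings are false" arithmetic comes from, it helps to look at the positive predictive value of a "significant" result, the quantity at the heart of the Ioannidis paper. Below is a minimal sketch of that basic relationship (I have left out his bias term), with purely illustrative numbers of my own:

```python
# Minimal sketch of the positive-predictive-value (PPV) relationship from
# Ioannidis (2005), without the bias term. The example numbers below are
# illustrative assumptions, not figures from any particular field.

def ppv(power, alpha, R):
    """Probability that a statistically 'significant' finding is actually true.

    power : 1 - beta, the chance a real effect is detected
    alpha : significance threshold (type I error rate)
    R     : prior odds that a tested relationship is real
    """
    return (power * R) / (power * R + alpha)

# Exploratory field: 1 real relationship per 20 tested, underpowered studies.
print(ppv(power=0.4, alpha=0.05, R=1 / 20))   # ~0.29 -> most "findings" false

# Well-powered confirmatory work on plausible hypotheses fares far better.
print(ppv(power=0.8, alpha=0.05, R=1 / 2))    # ~0.89
```

The point is not that any given result is fraudulent; it is that with weak priors and weak power, even honestly reported "significant" results can more often be false than true, which is the power issue noted above.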
Recently in The Economist there was another rant:
The idea that the same experiments always get the same results, no matter who performs them, is one of the cornerstones of science’s claim to objective truth. If a systematic campaign of replication does not lead to the same results, then either the original research is flawed (as the replicators claim) or the replications are (as many of the original researchers on priming contend). Either way, something is awry. To err is all too common.
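Part of what is going on here is, again, statistical power. Even when the original effect is real and nobody has done anything wrong, a small study will often fail to "replicate." A rough Monte Carlo sketch, with purely illustrative numbers of my own choosing, makes the point:

```python
# Rough Monte Carlo sketch (illustrative numbers, not the Economist's data):
# a real but modest effect, studied with small samples. Only "significant"
# originals get attention, yet reruns of the same design frequently miss.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
effect, n, alpha, trials = 0.3, 20, 0.05, 10_000

def significant():
    """One small two-group study of a real effect; True if p < alpha."""
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(effect, 1.0, n)
    return stats.ttest_ind(a, b).pvalue < alpha

originals = [significant() for _ in range(trials)]
replications = [significant() for hit in originals if hit]  # rerun only the "hits"

print(sum(originals) / trials)                 # ~0.15: power of the original design
print(sum(replications) / len(replications))   # ~0.15: replications do no better
```

Nothing in this toy example is flawed in the Economist's sense; the design is simply too weak, which is exactly the statistics point the article turns to next.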
The article continues:
Statisticians have ways to deal with such problems. But most scientists are not statisticians. Victoria Stodden, a statistician at Columbia, speaks for many in her trade when she says that scientists’ grasp of statistics has not kept pace with the development of complex mathematical techniques for crunching data. Some scientists use inappropriate techniques because those are the ones they feel comfortable with; others latch on to new ones without understanding their subtleties. Some just rely on the methods built into their software, even if they don’t understand them.
Yes, statisticians have evolved their techniques. But beyond that there are even better approaches. I recall that in the late 1960s, as we developed nonlinear estimation theory, many statisticians had no clue. These same techniques can now be used in genomic networks; will the statisticians reject those as well?
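For readers unfamiliar with the term, nonlinear estimation simply means fitting the parameters of a nonlinear model to noisy data, and the tools are now routine. A minimal sketch, using a toy Hill-function gene-regulation model of my own choosing (not any particular published network), is below:

```python
# Minimal sketch of nonlinear parameter estimation on a toy gene-regulation
# model. The Hill function and the synthetic data are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

def hill(x, vmax, k, n):
    """Hill-type response: expression level as a function of regulator level."""
    return vmax * x**n / (k**n + x**n)

rng = np.random.default_rng(0)
x = np.linspace(0.1, 10, 50)                              # regulator concentration
y = hill(x, 2.0, 3.0, 2.0) + rng.normal(0, 0.05, x.size)  # noisy "measurements"

# Nonlinear least squares: unlike linear regression, there is no closed-form
# estimator, so the fit is iterative and needs a starting guess.
params, cov = curve_fit(hill, x, y, p0=[1.0, 2.0, 1.0])
print(params)                 # estimates of vmax, k, n (should be near 2, 3, 2)
print(np.sqrt(np.diag(cov)))  # rough standard errors from the covariance
```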
Scientific papers are part of a process. They report results, hopefully correctly and fully. From there the statisticians can complain that not enough data was used; let that go to the side. The real problem comes when journalists catch on and push the story even further. In none of the above articles do we see that problem reflected. Marcus does comment upon his experience, but the others defer.
In an Atlantic article the author states:
Though scientists and science journalists are constantly talking up the value of the peer-review process, researchers admit among themselves that biased, erroneous, and even blatantly fraudulent studies easily slip through it. Nature, the grande dame of science journals, stated in a 2006 editorial, “Scientists understand that peer review per se provides only a minimal assurance of quality, and that the public conception of peer review as a stamp of authentication is far from the truth.” What’s more, the peer-review process often pressures researchers to shy away from striking out in genuinely new directions, and instead to build on the findings of their colleagues (that is, their potential reviewers) in ways that only seem like breakthroughs—as with the exciting-sounding gene linkages (autism genes identified!) and nutritional findings (olive oil lowers blood pressure!) that are really just dubious and conflicting variations on a theme.
Again, we see the inherent deficiencies of classic peer review. One should recall that peer review, as classically understood, means taking an article and sending it to persons whom the editor knows and who are assumed to know something about the subject. The reviewer then performs an anonymous review. It may take months and may even be done by some graduate student as a professional-development exercise. Then the author gets a pile of reviews with comments from unknowns. All too often no one actually "reviews" the experiment; they just try to see if it holds up to what they feel is the way it should have been done. If it is something new, then most reviewers reject it. Often the rejection says it was done before by someone else, but no reference is given. The editor all too often assumes the reviewer is "without sin". In fact the reviewer may be both clueless and an interested third party. Thus, unlike the comment above, peer review may actually be a barrier to entry or a way to justify prior work. This is especially true when reviews are anonymous.
Thus again, the Internet age allows for expanded review, a continual process, if the formula is correct.