I read this piece by Michael Brooks in New Scientist a couple of days ago, and it got me thinking. It essentially questions the effectiveness of the peer review system if it can let through (bunk) studies claiming that homeopathic remedies can cure cancer, or that the universe is in fact filled with luminous aether. A couple of the commenters take mild issue with his point. I especially liked this part of Ben Goldacre’s response:
Almost all studies are less than ideal, to a varying degree, because all studies must make methodological concessions to what is practical or affordable. It is the job of academics – and indeed others who are interested – to read each study, and critically appraise its merits and shortcomings.
It’s often good that poor studies are published if they contain any useful information – with everyone spotting that they are poor – and it’s also good that criticisms of them are made in the public domain where all can learn from them. It is even better if the critical comments from peer reviewers are available as well.
The simple fact that something has been published in an academic journal does not mean that the findings are correct (it may have been a fluke, for example) or that the conclusions the researchers ascribe to their own findings are valid (they may have measured something with an inaccurate method, that is prone to systematic error in one direction, or they might be guilty of wishful thinking). If something is published in an academic journal, it means it was an interesting piece of work whose outcome should be available to those who wish to read about it.
One suggestion that I’ve heard on a number of occasions, and that has been put into action at PLoS, is to let peer review happen after publication, by allowing other scientists to comment on and respond to the paper online. It struck me after reading Goldacre’s comment that this is already more or less how we do it. The bulk of the actual reviewing happens after a paper is published, as the other researchers in the field read it, judge it, and discuss it. Most of a study’s street cred (or lack thereof), at least within a scientific community, comes from that community’s collective judgment of its merits.
In general, this works pretty well, because reputation and trust are so powerful within a given field. Someone doing shoddy research will find their stock of both declining rapidly, even if they get it published. The problem, of course, is that if you aren’t part of that research scene it can be really hard to know who has the trust of their colleagues and who doesn’t. John Q. Public, watching some scientist on the news, doesn’t have the specialized knowledge to assess the scientist’s claims. Even worse, he has no way of picking up on the scientist’s reputation through the TV screen. Without either of those things—the ability to critically assess the research, or a good reason to trust the scientist—John Q. will do one of two things. He will either accept the scientist’s story based solely on lab coat and university credentials, or reject it because it seems to go against common sense or threatens his sense of identity somehow. (“Me? From a damn monkey?”)
I’m not sure how to transfer this reputation/trust factor out of a specialized community and into a public forum. Making post-publication peer review more transparent, à la PLoS, is one approach, though I’m still not sure it solves this problem of translating trust. That seems like a basic difficulty that is hard to get around.