Peer Review Quality Is Independent of Open Access

(Image: North Carolina State University)

Editor’s Note: A new report from the journal Science indicates that there are serious problems with the peer-review process at many open access journals. However, the issue may not be as clear as the article suggests. To get a different perspective, I solicited this guest post from Jon Tennant, an open access advocate and Ph.D. student at Imperial College London. Jon also blogs at Green Tea and Velociraptors and is one of the folks behind the Palaeocast podcast series. I’ll let Jon take it from here.

Open access is broken as a system. Or at least, that’s what a recent article published in Science seems to indicate. The article argues that a “sting operation,” conducted by John Bohannon, demonstrates that open access (OA) publishing is deeply flawed, because numerous OA journals accepted a scientifically and ethically flawed “spoof paper” for publication.

However, I’d argue that the article actually does little to damage the reputation, or the clear benefits, of open access research; instead, it exposes serious flaws in the traditional publishing model.

Bohannon submitted versions of the spoof paper under fake names to 304 journals, receiving 157 acceptances and 98 rejections (the remaining submissions were still under review, or the journals had gone derelict, when the article went to press). All of the targets were OA journals that pursue the “gold” route to access, whereby a fee is paid upon acceptance to make the article instantly available and re-usable for free. (Note: only about 29 percent of OA journals actually charge publication fees.)

On the face of it, this seems pretty bad for OA. And it is bad – but it’s bad for publishing in general, not just for open access. With no control group of non-OA journals, we can’t tell what the acceptance rate of phony manuscripts is across publishing as a whole, making it impossible to say anything about the relative quality control of OA journals versus conventional ones. Ironically, a missing control group is one of the scientific flaws deliberately inserted into the spoof paper itself: the [made-up] authors “found” a drug effective against cancerous cells, but never tested its effect on healthy cells. The sting also says nothing about OA journals beyond those sampled, particularly journals in other scientific fields.
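To make the missing-control objection concrete, here is a minimal sketch – in Python, purely for illustration – of the comparison Bohannon’s design cannot support. The OA figures are the ones reported above; the subscription-journal figures are invented placeholders, because no such control group was ever run, which is precisely the problem.

```python
# A sketch of the comparison the sting never made. The OA numbers are
# those reported in the Science article (157 acceptances, 98 rejections
# among submissions that reached a decision); the subscription-journal
# numbers are INVENTED placeholders, since no control group was run.
from math import sqrt, erf

def two_proportion_z(accept_a, total_a, accept_b, total_b):
    """Two-sided two-proportion z-test; returns (z, p-value)."""
    p_a, p_b = accept_a / total_a, accept_b / total_b
    pooled = (accept_a + accept_b) / (total_a + total_b)
    se = sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

oa_accept, oa_total = 157, 157 + 98    # reported by Bohannon
sub_accept, sub_total = 120, 255       # hypothetical: never measured

z, p = two_proportion_z(oa_accept, oa_total, sub_accept, sub_total)
print(f"OA acceptance rate: {oa_accept / oa_total:.1%}")
print(f"z = {z:.2f}, p = {p:.4f} (meaningless without real control data)")
```

With real control data in place of the invented figures, this is the comparison the sting would have needed before making any claim about OA specifically.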

(Image: Jon Tennant)

To his credit, Bohannon is quoted here as saying, “You can’t conclude that [online open access is a failure] from my experiment, because I didn’t do the right control – submitting a paper to paid-subscription journals.” That concession makes the Science piece even more awkward: the article as a whole implies that the results expose OA as a broken, even dangerous, business model, even while conceding otherwise in passing: “Some say that the open-access model itself is not to blame for the poor quality control revealed by Science’s investigation. If I had targeted the bottom tier of traditional, subscription-based journals, [David] Roos told me, ‘I strongly suspect you would get the same result.’”

Peer review is the process by which experts in the field give feedback and comments on a research manuscript; a journal editor then weighs those reviews and decides whether the paper should be published. In the majority of cases, both the reviewers and the editors are volunteer academics. Peer review is the supposed gold standard for research articles, designed to apply rigour and scrutiny and to weed out bad research. However, it is important to note that having an article accepted through peer review does not make it correct forever.

Acceptance means that two or three reviewers, based on their experience, found the article to be scientifically sound and an acceptable progression of knowledge at that point in time. Most research articles begin with a literature review in the introduction, which is, in its own way, a form of post-publication peer review of the works it cites. It’s not a perfect system for spotting bad science, but it’s the best we’ve got. Discussions about whether or not peer review is adequate in its current form are very much ongoing (see here and here).

What’s more interesting to me are the editorial decisions that led to acceptance. Thirty-six of the 304 submissions drew review comments outlining the flaws in the spoof paper, yet the editors of 16 of those journals accepted the paper anyway. A mismatch between what reviewers flag and what editors act on, perhaps? I’d love to see a broader study into this.
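To put rough numbers on that mismatch – trivial arithmetic, using only the figures quoted above:

```python
# Back-of-the-envelope arithmetic on the editorial decisions, using the
# figures from the Science article: 304 submissions, 36 of which drew
# reviews that spotted the flaws, 16 of which were accepted anyway.
submissions = 304
flagged_by_reviewers = 36
accepted_despite_flags = 16

print(f"Reviews caught the flaws: {flagged_by_reviewers / submissions:.1%} of submissions")
print(f"Accepted despite damning reviews: {accepted_despite_flags / flagged_by_reviewers:.1%}")
# -> roughly 11.8% and 44.4%: even where peer review worked,
#    editors overrode it nearly half the time.
```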

Andy Farke, a scientist, asked on Twitter which of the journals Bohannon submitted to would pass a simple “sniff test.” Often, these journals look suspect from just a cursory glance at their websites. It would also be interesting to see how many articles each journal had published since its inception – I’ve come across some that have published almost nothing, so some of the journals could have been little more than a name. This interactive map shows that many of the journals that accepted the paper were based in India and Africa, which may say something about the intense pressure to publish there. Perhaps this is a wake-up call for researchers in those regions to be properly educated on publishing ethics.

Matt Cockerill, co-founder of BioMed Central (the first for-profit OA publisher), said on Twitter that Springer and BioMed Central both rejected the paper. Elsevier, Sage, and Wolters Kluwer, on the other hand, all have journals that accepted it. Sage apologised, and Wolters Kluwer shut down the journal in question immediately. Elsevier, meanwhile, claimed it had nothing to do with them – they were “just holding the journal for a friend.” (Actually, what they told Science was “we publish it for someone else.”)

Meanwhile, two journals published by Hindawi (a mega-publisher of OA journals) and PLOS ONE – a megajournal often criticised for conducting only “peer-review lite” – all rejected the paper, which is good news for them and bad news for their critics. (Note: in my opinion, PLOS ONE does precisely what is needed to assess scientific and methodological soundness.)

So, frustratingly, the supposed publisher of top-class science, Science, has issued an article on a clearly flawed study that seems to be little more than a futile attempt to undermine the global progress of open access. The article may be sexy, and will most likely be damaging, but is it good quality? I’d reject it as a reviewer. Of course, it would be impolite to mention that, since retraction rate is strongly correlated with impact factor, Science – with an impact factor of around 31 – has one of the highest retraction rates in the business. Unfortunately, none of this has stopped other media outlets from picking the story up and using it to “expose” the flaws in OA publishing (e.g., The Independent).

There’s an important point here that bears repeating. This is an analysis of peer review that was conducted solely on OA journals – it shows nothing about the relative quality of peer review at open access versus non-open access journals. With new models of open, transparent peer review being developed at open access journals (see PeerJ and F1000 for examples), the old model of closed pre-publication peer review is slowly being pushed into the shadows, perhaps not a moment too soon. Open access is hardly the “Wild West” of publishing that Bohannon and Science would have you believe. What the sting does show is that the academic and publishing communities need to work together to build an identification system for unscrupulous journals and publishers, akin to Jeffrey Beall’s infamous “Predatory Publishers” list.

Another Science article, published at the same time as the OA sting piece, quotes Vitek Tracz, founder of F1000Prime and BioMed Central, as saying that “peer review is sick and collapsing under its own weight.” That may well be true, but it applies to both OA and traditional publishing, and it certainly should not be used to undermine the overwhelming benefits of open access research. All this sting operation has achieved is to demonstrate that Science is a blind bee that stung itself in its confusion.

And for those worried about predatory publishers, I believe Mike Taylor said it best, with something along the lines of: scientists aren’t stupid; we know the good venues, we know how to find them, and we can spot dodgy emails from a mile off. The way to combat predatory publishers, quite simply, is not to publish with them.

“Saying that the problem with open access is that it enables internet scamming is like saying that the problem with the international finance system is that it enables Nigerian wire transfer scams.” – Michael Eisen

8 thoughts on “Peer Review Quality Is Independent of Open Access”

  1. I think this open-access investigation by Science comes as a reaction to the embarrassment the journal faced when it published a much-hyped, NASA-funded astrobiology study on bacteria surviving on arsenic, a study that was refuted last summer. Apparently, according to the article below, Science’s editors were too dazed by the “sexiness” of the research to carry out a robust critique of the claims in the paper, and may have intentionally selected sympathetic reviewers. Certainly, a reviewer comment such as “Reviewing this paper was a rare pleasure” is rare indeed in a journal with a 7% acceptance rate. Though open-access journals are not innocent by a long shot, there are some respectable ones that carry out proper peer review and are staffed by professionals.

    http://www.usatoday.com/story/tech/columnist/vergano/2013/02/01/arseniclife-peer-reviews-nasa/1883327/

  2. Commenter

    More importantly, a certain big company publicly announced that, of a large number of published studies (likely mostly in traditional, well-reputed journals), over 90 percent turned out to be non-reproducible.

    If this result is typical, traditional publishing is dead. Nobody will trust something that is 90 percent rubbish. This investigation seems like a way to shift the blame onto other journals – ‘it is not us, it is them.’

  3. Hi Jon,
    I saw this discussion and think this may be the link to the Ioannidis article that you want:
    http://www.plosmedicine.org/article/info%3Adoi%2F10.1371%2Fjournal.pmed.0020124

    Citation: Ioannidis JPA (2005) Why Most Published Research Findings Are False. PLoS Med 2(8): e124. doi:10.1371/journal.pmed.0020124

    I also remember having seen another link, which for the moment eludes me.

    The fact is that, the way science is done, many findings turn out to be untrue within a couple of years at most.
