What lesson do rising retraction rates hold for peer review?


In January, Haruko Obokata and colleagues published two papers in the journal Nature suggesting that a simple acid bath can convert differentiated cells back to a stem-cell-like state. This finding, if true, would have been revolutionary. Last week, however, after five months of debate among peers, the papers were retracted.

This incident is part of a larger trend. The rate of retractions of scientific papers has been growing over the past decade, suggestive to some of a crisis of confidence in science. Can we no longer trust the scientific literature? Is peer review dysfunctional?

Retractions reveal both science's weakness and its strength. Science frequently goes wrong; that's its weakness. Then science corrects itself; that's its strength. And yet there's a lesson in the rising rate of retractions.

Amplifying the noise in the system

When a scientific finding is published, our major indicator of its reliability and importance is the prestige of the journal where it appears. So when Obokata's findings appeared in Nature, one of the top journals, the world paid attention. The story was reported in mass media across the globe. It is difficult to estimate the cost of confusing the world with an incorrect message at this scale.

The problem is not that science, for five months, was in a state of confusion about Obokata's claims. Confusion in science is part of the process of working things out. The problem is that the message of the papers was amplified to global visibility before the field could resolve its confusion.

In the current system of prepublication peer review, a paper is evaluated before publication by a small number of other scientists (typically three or four). Such reviews formed the basis for presenting Obokata's claims as fact to the whole world.

When one of us makes a claim (by submitting a paper), it would seem wise not to blurt it out to the whole world after just four of us (the peer reviewers) have had a look at it.

There's a clear lesson in the Obokata story and the general trend of rising retraction rates. It was prepublication peer review that failed to catch the error. And it was postpublication peer review, the open debate on the web, that corrected the path of science.

Nature, Science, and other prestige journals are run by talented people who have every incentive to publish the best research. Their review process is professional and their reviewers are highly qualified. However, three or four reviewers asked to comment within a couple of weeks cannot achieve the breadth or depth of evaluation that an open discussion by hundreds of scientists can achieve over several months.

We need this sort of open evaluation among peers before we can justify alerting the entire world. The aura of prestige journals grossly overstates the actual confidence we can have in a scientific result when it first appears. Slight tweaks to the review process, as discussed in a Nature editorial reflecting on the Obokata story, will not solve the problem. Even dramatic changes, such as doubling the number of reviewers or requiring independent replication, would fall short – as long as peer review is restricted to the prepublication phase.

Prepublication peer review is inadequate

Prepublication peer review is flawed for three reasons. First, it is restricted to a small number of people: the editors and peer reviewers. To bring the brain power of the entire community of peers into the evaluation process, the paper first has to be made publicly available – that is, published. Second, prepublication peer review is conducted in secret. Since the paper is not yet published, the review process, too, is hidden from public scrutiny. Typically, the reviewers are anonymous and their reviews secret, so there is no strong disincentive to self-serving or subtly biased reviewing. Third, the review process delays publication. When conducted quickly, it may lack thoroughness; when given more time, it slows the progress of science. The present model suffers from both of these drawbacks.

Establishing the reliability of a finding is only half the challenge; the other half is assessing its implications and importance. Prepublication peer review falls short on both counts: understanding the full implications of a study, too, requires an open peer debate.

We've inherited the current system from the pre-internet age. Back when articles had to be printed on physical paper, we needed to filter before publication to control costs. Today the internet enables us to "publish then filter", to use Clay Shirky's useful phrase. This will revolutionise scientific publishing. For the moment, however, the current system is held in place by historical inertia, our habits, and the financial interests of the publishing industry.

Open evaluation

The emerging alternative model is open evaluation (OE), a transparent public process of peer review and rating after publication. All scientific papers, in such a system, would be instantly published in an open access model, where everyone can read them. They would then be vetted and ranked postpublication in an ongoing fashion.

The transition is not going to be easy or swift, but recent developments and a growing number of startup companies are moving in the right direction. PubMed, a repository of science publications, has established a forum called PubMed Commons, where scientists can leave comments on any paper. PLOS Open Evaluation provides a web-based system for sampling opinions on papers through ratings. New journals including F1000 Research and ScienceOpen rely entirely on postpublication peer review.

Once open evaluation ratings on published papers become available, scientists and journalists will no longer depend on the impact factor of the journal as the only immediately available indication of a new paper's reliability and importance.

A decade from now, Nature, or its successor in prestige science publishing, might pick the most exciting among previously published studies that have fared well through months of open evaluation. With the evaluation taken care of, publishers could focus on helping authors communicate the findings to an audience that extends to other fields and beyond science. Had Obokata and colleagues published their findings first for their peers, the flaws of the papers would have been exposed before the world was alerted. It would have saved us a lot of confusion.



This story is published courtesy of The Conversation (under Creative Commons-Attribution/No derivatives).

