Fixing published research mistakes not easy; fixing the publishing system may be harder

February 4, 2016 by Bob Shepard, University of Alabama at Birmingham

A commentary published today in Nature suggests that the process for fixing mistakes in peer-reviewed research articles is flawed. The article, written by scientists at the University of Alabama at Birmingham, points out that journals are slow to respond and even slower to take action when questions regarding the accuracy of a published research paper are raised.

The authors say that, in the course of assembling weekly lists of articles on obesity and nutrition, they began to notice more peer-reviewed articles containing what they refer to as 'substantial or invalidating errors.' "What was striking was how severe some of these errors were, involving mathematically impossible values, probabilities greater than one, weight loss results that, if true, would have required that adults had grown over 6 centimeters in height in two months, to name just a few," said David B. Allison, Ph.D., leader of the research team and associate dean for Science in the UAB School of Public Health.
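Errors of this kind are, in principle, machine-checkable before a reviewer ever sees them. As a purely hypothetical illustration (this is not the commentary authors' method, and the field names are an invented schema), a screening script could flag reported values that are mathematically impossible:

```python
def sanity_check(report):
    """Flag mathematically impossible values in a reported result.

    `report` is a dict of summary statistics; the keys used here
    (p_value, sd, n) are illustrative, not a real journal schema.
    """
    problems = []
    p = report.get("p_value")
    if p is not None and not (0.0 <= p <= 1.0):
        # Probabilities greater than one (or negative) cannot occur.
        problems.append(f"probability out of range: p = {p}")
    sd = report.get("sd")
    if sd is not None and sd < 0:
        # Standard deviations are non-negative by definition.
        problems.append(f"negative standard deviation: {sd}")
    n = report.get("n")
    if n is not None and (n != int(n) or n < 1):
        # Sample sizes must be positive whole numbers.
        problems.append(f"impossible sample size: {n}")
    return problems

# A paper reporting p = 1.3 would be flagged immediately:
print(sanity_check({"p_value": 1.3, "sd": 2.1, "n": 40}))
# → ['probability out of range: p = 1.3']
```

Checks like these catch only the crudest errors, of course; the domain-specific impossibilities the authors describe, such as implied adult height growth, require knowing what values are physically plausible.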

"These errors involved factual mistakes or practices which veered substantially from clearly accepted procedures in ways that, if corrected, might alter a paper's conclusions," said Andrew Brown, Ph.D., a scientist in the UAB School of Public Health and co-author of the commentary. "In several cases, our noting these errors led to retractions of the papers containing them."

Brown says the team attempted to address more than 25 of these errors with letters to authors or journals. Their efforts revealed invalidating practices that occur repeatedly and showed how journals and authors react when faced with mistakes that need correction.

"We learned that post-publication peer review is not consistent, smooth or rapid," Allison said. "Many journal editors and staff seemed unprepared to investigate, take action or even respond. Too often, the process spiraled through layers of ineffective emails among authors, editors and unidentified journal representatives, often without any public statement's being added to the original article."

During the informal 18-month review of literature, the authors found a number of recurring problems:

  • Editors are often unprepared or reluctant to take speedy and appropriate action
  • Where to send expressions of concern is unclear
  • Journal staff who acknowledged invalidating errors were reluctant to issue retractions or even timely expressions of concern
  • Some journals may charge fees of more than $1,000 to authors who write in to correct others' mistakes
  • No standard mechanism exists to request raw data for review to confirm the errors
  • Concerns expressed through online forums are easily overlooked and are not linked to the original article, so its readers may never find them

The authors observed that there is little formal guidance for post-publication corrections. They recommend that journals should standardize their submission and peer-review processes, establish clear protocols to address expressions of concern, and waive publication fees associated with those expressions of concern.

Further suggestions include creating an environment in which readers' concerns are addressed rapidly and providing clear information on how and to whom such concerns should be submitted.

"We also think it is very important to create an understanding that such expressions of concern are not a condemnation of the work, but should be viewed as an alert that the work is undergoing further scrutiny," said co-author Kathryn A. Kaiser, Ph.D.

Additional recommendations suggest journals and statistical experts should work together to identify common statistical mistakes and that authors and journals should be prepared to share data and analysis code quickly when questions arise.

The authors noted common statistical errors in many of the studies, including mistaken design or analysis of cluster randomized trials, miscalculations in meta-analyses, and inappropriate baseline comparisons.

The authors acknowledge that their work did not constitute a formal survey and suggest that a more formal, systematic survey is needed to establish whether their experiences are representative of science in general.

"Ideally, anyone who detects a potential problem with a study will engage, whether by writing to and editors or by commenting online, and will do so in a collegial way," Brown said. "Scientists who engage in post-publication review often do so out of a sense of duty to their community, but this important work does not come with the same prestige as other scientific endeavors."

"Robust science needs robust corrections," Allison added. "It is time to make the process less onerous."


More information: Reproducibility: A tragedy of errors. … dy-of-errors-1.19264



2.6 / 5 (5) Feb 04, 2016
And some people continue to hound a group of people whose ideas are not the "Bought and Paid For" 'Mainstream Ideas' that the paid anti-REAL-science trolls are about discrediting. This shows WHY: the whole 'Peer Review Process', as it is now, is as much of a sham as is the sock-puppetly gamed points system for these comments.

There are paid individuals whose job it is to protect the Corporate Fossil Energy System and all of its huge long line of money making subsidiaries that have mandated to the Patent Office to throw out as 'Crackpot' anything achieving over 80% efficiency, either as a motor, generator or battery, and frankly, we have electric motors that will power themselves and provide work energy, we have generators that produce energy with up to a 500% increase in usable current and beyond, and it is well known to insiders that battery companies long ago built a battery that would never lose its charge but that would KILL their sales volume and pricing structures.
3 / 5 (4) Feb 04, 2016
For understanding the article: zero points.

This is not about corporate fossil fuels. This is about a problem with how journals handle peer review during post-publication discovery of errors.

This is also not an issue of 'mainstream science'. What you fail to understand is that journals are not run by scientists. They are profit oriented businesses. Once printed there is only negative profit in changing/addressing any problems in a back issue for a publishing company.

That these problems should be addressed swiftly is not a question. That they aren't is not a scientific but a profit problem. Make it profitable to change errors and it will happen (though I have no idea how one would go about doing that other than by making huge PR campaigns of all the errors in publication X so that they lose reputation if they don't)
