Poor transparency and reporting jeopardize the reproducibility of science

January 4, 2016

Reported research across the biomedical sciences rarely provides the full protocols, data, and level of transparency needed to verify or replicate a study, according to two articles published in PLOS Biology on January 4th, 2016 as part of its new Meta-Research Section. The authors argue that the information publicly available on reported research is in dire need of improvement.

Authors of one study, Shareen Iqbal from Emory University, John Ioannidis from the Meta-Research Innovation Center at Stanford (METRICS), and colleagues, analyzed a corpus of papers published between 2000 and 2014 to determine the extent to which researchers report key information necessary for properly evaluating and replicating published research, including the availability of protocols and data and the frequency of novel versus replication studies. The authors were surprised by the results: out of 441 articles drawn from across the biomedical literature, only one paper provided a full protocol and no paper made all of its data available. The majority of studies did not state funding sources or conflicts of interest, and replication studies were very rare.

"We hope our survey will further sensitize scientists, funders, journals and other science-related stakeholders about the need to improve these indicators," the authors stated.

A related study, led by Ulrich Dirnagl and his team at Charité Universitätsmedizin in Berlin, Germany, examined hundreds of published stroke and cancer research experiments and found that the vast majority do not contain sufficient information about how many animals were used. What's more, in many papers animals "vanished" over the course of the study. Using a computer model, the team simulated the effects of such animal loss on the validity of the experiments. They found that the more animals were lost or removed, the shakier and more biased the experimental conclusions became.
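The intuition behind that simulation result can be illustrated with a minimal sketch (this is not the team's actual model; the group sizes, effect size, and attrition mechanism below are illustrative assumptions). If the animals that drop out are not a random sample, say, the worst-responding treated animals die or are excluded before analysis, the estimated treatment effect is systematically inflated:

```python
import random
import statistics

def simulate_trial(n_per_group=10, true_effect=1.0, attrition=0.0, seed=0):
    """Simulate one two-group animal experiment (illustrative only).

    'attrition' is the fraction of treated animals lost. Loss is
    non-random: the lowest-scoring treated animals drop out, as if
    the sickest ones died or were excluded before analysis.
    """
    rng = random.Random(seed)
    control = [rng.gauss(0.0, 1.0) for _ in range(n_per_group)]
    treated = [rng.gauss(true_effect, 1.0) for _ in range(n_per_group)]
    n_lost = int(attrition * n_per_group)
    if n_lost:
        treated = sorted(treated)[n_lost:]  # drop the worst responders
    return statistics.mean(treated) - statistics.mean(control)

def mean_estimate(attrition, reps=2000):
    """Average estimated effect over many simulated experiments."""
    return statistics.mean(
        simulate_trial(attrition=attrition, seed=s) for s in range(reps)
    )

no_loss = mean_estimate(0.0)     # close to the true effect of 1.0
heavy_loss = mean_estimate(0.3)  # inflated: biased well above 1.0
print(f"estimated effect, no attrition:  {no_loss:.2f}")
print(f"estimated effect, 30% attrition: {heavy_loss:.2f}")
```

With random attrition the estimate would stay unbiased and merely lose statistical power; it is the non-random loss modeled here that produces the bias the authors warn about, which is why reporting how many animals were lost, and why, matters.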

"The study began with an attempt to look at the robustness of findings in a handful of preclinical papers" explains first author Constance Holman, "but the sheer number of missing animals stopped us in our tracks". In human medicine, publishing a clinical trial without information about the number of patients, or how many dropped out or died over the course of a study would be unthinkable. But nobody had looked carefully at whether animal numbers are properly reported in .

Billions of dollars are wasted every year on research that cannot be reproduced. The findings of these two studies join a long list of concerns about bias and reporting in basic research. However, they also establish ways in which research can become more transparent and potentially more reproducible.


More information: Holman C, Piper SK, Grittner U, Diamantaras AA, Kimmelman J, Siegerink B, et al. (2016) Where Have All the Rodents Gone? The Effects of Attrition in Experimental Research on Cancer and Stroke. PLoS Biol 14(1): e1002331. DOI: 10.1371/journal.pbio.1002331




1 / 5 (3) Jan 04, 2016
One suspects that, if such critical information was missing, those trying to reproduce the "experiments" listed would have noticed that in setting up. Instead, they went ahead and carried out the "experiment" without what supposedly was important material. And the fact that they didn't reproduce the original "results" is the only "evidence" they had that something was fluky. How did all of those providing those false results think they would get away with it? It invokes the article also on today's marquee, "Why too much evidence can be a bad thing", claiming that, if every data point of an "experiment" agrees, then the "conclusion" is likely wrong. It looks very much like a major effort is underway to whitewash the fact that "science" has been lying for a century or more.
2.3 / 5 (3) Jan 04, 2016
This problem has been known for decades. It persists because peer reviewers for journals routinely give thumbs-up to poorly written research reports.
These journals should die away in favor of free websites that report research results.
1 / 5 (2) Jan 04, 2016
This explains all the shoddy global warming research over the decades.
not rated yet Jan 05, 2016
I read research journals for an hour or two each day. I rarely bother with the methods and data description. Partly that is because it is often irrelevant, but also because when I do want to check, it usually lacks the details I seek. It seems to me these sections are a ritual in which researchers game peer review by providing a fog of details without thinking about what they are actually reporting to their readers. The catch is that if they gave good details, reviewers would find more to fault, and so lessen the chances of paper acceptance. Better to report badly and hope the reviewers judge on the number of pages given to a section than to provide well-thought-out details and risk a reviewer giving a careful analysis.
5 / 5 (1) Jan 05, 2016
There are a few factors at work here:
1) Space in papers is limited. You have to cram all your results into a certain number of pages. So you concentrate on methods and results. If any space is left over you give a bit of review of the state of the art so that others see you aren't - unintentionally - duplicating work already done (i.e. that your work is original).
2) Data may not be public. Studies in biomedicine are for the overwhelming part funded (at least partially) by a company, often with the aim of finding out whether their drug X is effective or not. No way are they going to put the data out to the public (read: competitors) for free. Studies cost way too much money and give way too much of a time advantage over the competition for a company to simply give that up.
They found that the more animals lost or removed, the shakier or more biased the experimental conclusions.

No wonder. Animal 'loss' means less data (statistical power)...or a bad experimental setup.
