Poor transparency and reporting jeopardize the reproducibility of science

Reported research across the biomedical sciences rarely provides the full protocols, data, and level of transparency needed to verify or replicate a study, according to two articles published in PLOS Biology on January 4th, 2016, as part of its new Meta-Research Section. The authors argue that the information made publicly available about reported research is in dire need of improvement.

In one study, Shareen Iqbal of Emory University, John Ioannidis of the Meta-Research Innovation Center at Stanford (METRICS), and colleagues analyzed a corpus of papers published between 2000 and 2014 to determine the extent to which researchers report the key information needed to evaluate and replicate published research, including the availability of protocols and data and the frequency of novel versus replication studies. The results surprised the authors: of 441 articles drawn from across the biomedical literature, only one provided a full protocol, and none made all of its data available. The majority of studies did not state funding sources or conflicts of interest, and replication studies were very rare.

"We hope our survey will further sensitize scientists, funders, journals and other science-related stakeholders about the need to improve these indicators," the authors stated.

A related study, led by Ulrich Dirnagl and team at Charité Universitätsmedizin in Berlin, Germany, examined hundreds of published stroke and cancer research experiments and found that the vast majority don't contain sufficient information about how many animals were used. What's more, in many papers animals "vanished" over the course of the study. Using a computer model, the team simulated the effects of such animal loss on the validity of the experiments. They found that the more animals lost or removed, the shakier or more biased the experimental conclusions.
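The team's actual simulation isn't reproduced here, but the mechanism it describes can be sketched with a toy Monte Carlo model (a hypothetical illustration, not the authors' code): two groups are drawn from the same distribution, so there is no true treatment effect, yet dropping the worst-scoring animals from only the treated group manufactures an apparent benefit. The group sizes, drop rule, and function names below are illustrative assumptions.

```python
# Toy model of outcome-dependent attrition (illustrative, not the
# published model): both groups sample the same distribution, so any
# estimated effect is pure bias or noise.
import random
import statistics

random.seed(42)

def run_experiment(n=20, drop_worst=0):
    """One simulated experiment with no true treatment effect.

    `drop_worst` animals with the lowest scores are removed from the
    treatment group only -- mimicking outcome-dependent attrition.
    """
    control = [random.gauss(0, 1) for _ in range(n)]
    treated = [random.gauss(0, 1) for _ in range(n)]
    treated = sorted(treated)[drop_worst:]  # drop the worst responders
    return statistics.mean(treated) - statistics.mean(control)

def mean_effect(drop_worst, trials=2000):
    """Average estimated treatment effect over many repetitions."""
    return statistics.mean(run_experiment(drop_worst=drop_worst)
                           for _ in range(trials))

# With no attrition the estimate hovers near the true value, zero.
# Dropping the 5 worst-scoring treated animals inflates it upward.
no_attrition = mean_effect(0)
with_attrition = mean_effect(5)
print(f"no attrition:  {no_attrition:+.3f}")
print(f"drop 5 worst:  {with_attrition:+.3f}")
```

Because the dropped animals are selected on the outcome itself, the surviving treated animals are no longer a random sample, and the estimated effect is biased upward even though the treatment does nothing — the "shakier or more biased" conclusions the article describes.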

"The study began with an attempt to look at the robustness of findings in a handful of preclinical papers" explains first author Constance Holman, "but the sheer number of missing animals stopped us in our tracks". In human medicine, publishing a clinical trial without information about the number of patients, or how many dropped out or died over the course of a study would be unthinkable. But nobody had looked carefully at whether animal numbers are properly reported in .

Billions of dollars are wasted every year on research that cannot be reproduced. The findings of these two studies join a long list of concerns about bias and reporting in basic research. However, they also establish ways in which research can become more transparent and potentially more reproducible.


More information: Holman C, Piper SK, Grittner U, Diamantaras AA, Kimmelman J, Siegerink B, et al. (2016) Where Have All the Rodents Gone? The Effects of Attrition in Experimental Research on Cancer and Stroke. PLoS Biol 14(1): e1002331. DOI: 10.1371/journal.pbio.1002331
Journal information: PLoS Biology

Citation: Poor transparency and reporting jeopardize the reproducibility of science (2016, January 4) retrieved 15 October 2019 from https://phys.org/news/2016-01-poor-transparency-jeopardize-science.html


User comments

Jan 04, 2016
One suspects that, if such critical information was missing, those trying to reproduce the "experiments" listed would have noticed that in setting up. Instead, they went ahead and carried out the "experiment" without what supposedly was important material. And the fact that they didn't reproduce the original "results" is the only "evidence" they had that something was fluky. How did all of those providing those false results think they would get away with it? It invokes the article also on today's marquee, "Why too much evidence can be a bad thing", claiming that, if every data point of an "experiment" agrees, then the "conclusion" is likely wrong. It looks very much like a major effort is underway to whitewash the fact that "science" has been lying for a century or more.

Jan 04, 2016
This problem has been known for decades. It persists because peer reviewers for journals routinely give thumbs-up to poorly written research reports.
These journals should die away in favor of free websites that report research results.

Jan 04, 2016
This explains all the shoddy global warming research over the decades.

Jan 05, 2016
I read research journals for an hour or two each day. I rarely bother with the methods and data descriptions. Partly that is because they are often irrelevant, but also because when I do want to check something, they usually lack the details I seek. It seems to me these sections are a ritual in which researchers game peer review by providing a fog of details without thinking about what they are actually reporting to their readers. The catch is that if they gave good details, reviewers would find more to fault, and so lessen the chances of acceptance. Better to report badly and hope the reviewers judge on the number of pages given to a section than to provide well-thought-out details and risk a reviewer's careful analysis.

Jan 05, 2016
There are a few factors at work here:
1) Space in papers is limited. You have to cram all your results into a certain number of pages, so you concentrate on methods and results. If any space is left over, you give a bit of a review of the state of the art so that others see you aren't - unintentionally - duplicating work already done (i.e. that your work is original).
2) Data may not be public. Studies in biomedicine are for the overwhelming part funded (at least partially) by a company, often with the aim of finding out whether their drug X is effective or not. No way are they going to put the data out to the public (read: competitors) for free. Studies cost far too much money and give far too much of a time advantage over the competition for a company to simply give that up.
"They found that the more animals lost or removed, the shakier or more biased the experimental conclusions."

No wonder. Animal 'loss' means less data (statistical power)...or a bad experimental setup.
