Poor transparency and reporting jeopardize the reproducibility of science

January 4, 2016

Reported research across the biomedical sciences rarely provides the full protocols, data, and level of transparency needed to verify or replicate a study, according to two articles published in PLOS Biology on January 4th, 2016, as part of the journal's new Meta-Research Section. The authors argue that the information made publicly available about reported research is in dire need of improvement.

Authors of one study, Shareen Iqbal from Emory University, John Ioannidis from the Meta-Research Innovation Center at Stanford (METRICS), and colleagues analyzed a corpus of papers published between 2000 and 2014 to determine the extent to which researchers report the key information needed to properly evaluate and replicate published research, including the availability of protocols and data and the frequency of novel versus replication studies. The results surprised them: of 441 articles drawn from across the biomedical literature, only one provided a full protocol, and not a single paper made all of its data available. The majority of studies did not state funding sources or conflicts of interest, and replication studies were very rare.

"We hope our survey will further sensitize scientists, funders, journals and other science-related stakeholders about the need to improve these indicators," the authors stated.

A related study, led by Ulrich Dirnagl and his team at Charité Universitätsmedizin in Berlin, Germany, examined hundreds of published stroke and cancer experiments and found that the vast majority do not contain sufficient information about how many animals were used. What's more, in many papers animals "vanished" over the course of the study. Using a computer model, the team simulated the effects of such animal loss on the validity of the experiments. They found that the more animals were lost or removed, the shakier and more biased the experimental conclusions became.
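
The article does not reproduce the team's model, but the mechanism is easy to illustrate. The following Python sketch (entirely illustrative; the function, group sizes, and effect size are assumptions, not the Berlin team's code) simulates a simple two-group animal experiment and compares random attrition with attrition that is correlated with outcome, such as quietly dropping the sickest treated animals:

import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(n_per_group=10, true_effect=0.5, attrition=0.0, biased=False):
    # Simulate one two-group experiment; return the estimated treatment effect.
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(true_effect, 1.0, n_per_group)
    n_drop = int(round(attrition * n_per_group))
    if n_drop:
        if biased:
            # Outcome-linked loss: drop the treated animals with the worst outcomes.
            treated = np.sort(treated)[n_drop:]
        else:
            # Random loss: drop animals at random from both groups.
            treated = rng.choice(treated, n_per_group - n_drop, replace=False)
            control = rng.choice(control, n_per_group - n_drop, replace=False)
    return treated.mean() - control.mean()

for attrition in (0.0, 0.2, 0.4):
    random_est = np.mean([simulate_trial(attrition=attrition) for _ in range(5000)])
    biased_est = np.mean([simulate_trial(attrition=attrition, biased=True) for _ in range(5000)])
    print(f"attrition={attrition:.0%}  random loss: {random_est:+.2f}  biased loss: {biased_est:+.2f}")

Under random loss the average estimate stays near the true effect of 0.5 and only the uncertainty grows, while outcome-linked loss steadily inflates the apparent effect. That is the kind of bias unreported attrition can hide.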

"The study began with an attempt to look at the robustness of findings in a handful of preclinical papers" explains first author Constance Holman, "but the sheer number of missing animals stopped us in our tracks". In human medicine, publishing a clinical trial without information about the number of patients, or how many dropped out or died over the course of a study would be unthinkable. But nobody had looked carefully at whether animal numbers are properly reported in .

Billions of dollars are wasted every year on research that cannot be reproduced. The findings of these two studies join a long list of concerns about bias and reporting in basic research. However, they also establish ways in which research can become more transparent and potentially more reproducible.

More information: Holman C, Piper SK, Grittner U, Diamantaras AA, Kimmelman J, Siegerink B, et al. (2016) Where Have All the Rodents Gone? The Effects of Attrition in Experimental Research on Cancer and Stroke. PLoS Biol 14(1): e1002331. DOI: 10.1371/journal.pbio.1002331

5 comments

julianpenrod
Jan 04, 2016
One suspects that, if such critical information was missing, those trying to reproduce the "experiments" listed would have noticed that in setting up. Instead, they went ahead and carried out the "experiment" without what supposedly was important material. And the fact that they didn't reproduce the original "results" is the only "evidence" they had that something was fluky. How did all of those providing those false results think they would get away with it? It invokes the article also on today's marquee, "Why too much evidence can be a bad thing", claiming that, if every data point of an "experiment" agrees, then the "conclusion" is likely wrong. It looks very much like a major effort is underway to whitewash the fact that "science" has been lying for a century or more.
edshort4
Jan 04, 2016
This problem has been known for decades. It persists because peer reviewers for journals routinely give thumbs-up to poorly written research reports.
These journals should die away in favor of free websites that report research results.
dlethe
Jan 04, 2016
This explains all the shoddy global warming research over the decades.
Squirrel
Jan 05, 2016
I read research journals for an hour or two each day, and I rarely bother with the methods and data descriptions. Partly that is because they are often irrelevant, but also because, when I do want to check something, they usually lack the details I seek. It seems to me these sections are a ritual in which researchers game peer review by providing a fog of details without thinking about what they are actually reporting to their readers. The bug is that if they gave good details, reviewers would find more to fault and so lessen the chances of acceptance. Better to report badly and hope the reviewers judge on the number of pages given to a section than to provide well-thought-out details and risk a reviewer giving them a careful analysis.
antialias_physorg
Jan 05, 2016
There are a few factors at work here:
1) Space in papers is limited. You have to cram all your results into a certain number of pages, so you concentrate on methods and results. If any space is left over, you give a brief review of the state of the art so that others see you aren't unintentionally duplicating work already done (i.e., that your work is original).
2) Data may not be public. Studies in biomedicine are for the overwhelming part funded (at least partially) by a company, often with the aim of finding out whether their drug X is effective. No way are they going to put the data out to the public (read: competitors) for free. Studies cost far too much money, and give far too great a time advantage over the competition, for a company to simply give that up.
"They found that the more animals were lost or removed, the shakier and more biased the experimental conclusions became."

No wonder. Animal 'loss' means less data (statistical power)...or a bad experimental setup.
