Exacerbating the replication crisis in science: Replication studies are often unwelcome

April 11, 2017

Researchers in London have investigated 1151 psychology journals and found that just 3% state that they welcome scientists to submit replication studies for publication. In replication studies, scientists try to replicate the findings of previous studies to verify that their results are robust and correct.

Publishing research findings in journals is a career yardstick for many scientists: the number and perceived impact of their publications are considered important measures of career success. This view is also frequently taken by the funding bodies and tenure committees that provide academic researchers with hotly contested research funding and job security.

Scientific journals provide a platform for scientists to communicate their research. The impact of published papers is often gauged by how many times they are cited (or referenced) by other papers, the idea being that such citations indicate that other scientists are building on the original work.

Scientific journals are also judged on the citations their published manuscripts receive, leading to metrics such as the impact factor. Put simply, the impact factor represents the average number of citations an article in a journal receives in a year. Journals with higher impact factors are often considered to be more desirable places to publish, although the impact factor has sometimes been criticized as an inaccurate measurement of quality.
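As a rough sketch, the standard two-year impact factor for year Y divides the citations received in Y by articles published in the previous two years by the number of citable items published in those two years. The numbers below are illustrative, not real journal data:

```python
def impact_factor(citations_to_prev_two_years: int,
                  items_published_prev_two_years: int) -> float:
    """Two-year impact factor for year Y: citations received in Y
    to items published in Y-1 and Y-2, divided by the number of
    citable items published in Y-1 and Y-2."""
    return citations_to_prev_two_years / items_published_prev_two_years

# e.g. 600 citations in 2017 to articles from 2015-2016,
# across 200 citable articles published in those two years:
print(impact_factor(600, 200))  # 3.0
```

Note that what counts as a "citable item" is decided by the index compiler, which is one reason the metric is contested.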

The tendency for many journals to accept only papers that report positive and original findings has been termed publication bias. In traditional academic publishing, a small minority of submitted studies are accepted for publication, based on their perceived significance or originality, or if they confirm an existing hypothesis. Studies that are thought to provide only a small advance, or those that present "non-impactful" or negative results are frequently rejected.

In fact, many journals explicitly state in their aims or guide to authors that high levels of significance or originality are a prerequisite for publication. Journal editors often enforce extremely high rejection rates, the idea being that selective publishing will increase the journal's impact factor.

However, this traditional publishing model has drawn criticism that it may exclude studies with perceived low impact but real value for scientific integrity and development. Replication studies are one such example.

It is important that scientific experiments are repeatable: rerunning them should produce identical or similar results. Otherwise, it is difficult to know whether experimental results reveal a real phenomenon, or are merely a one-off caused by experimental error or highly specific conditions that are difficult to recreate.

"Science progresses through and contradiction. The former builds the body of evidence, the latter determines whether such a body exists," explains Professor Neil Martin, of Regent's University London, lead author on the study, recently published in Frontiers in Psychology.

Recently, a so-called "replication crisis" has been brewing in science. This is occurring across various fields, but the issue has recently come to a head following some high-profile failed replications in psychology. "Researchers have been accused of various creative methodological misdemeanors which may have led to false-positive results being published," says Martin. "We're still uncovering questionable research practices in some well-known historical studies, and I would not be surprised to see many others emerging."

There is a growing awareness and discussion in the wider scientific community that replications are not performed or published enough. This could potentially result in whole areas of scientific research being constructed on foundations of sand.

There are many reasons for the lack of replication studies in various fields of science. The overriding scientific culture is one of innovation, originality and discovery, and scientists may be reluctant to conduct "housekeeping" replication studies, when resources are limited.

Another potential reason is the perception that replications have limited impact, and will therefore be difficult to publish. "Journals have been criticized for not readily accepting replications, but the basis for this criticism is anecdotal," explains Martin.

To begin to quantify this phenomenon, Martin and Richard Clarke, a research student at the London School of Hygiene and Tropical Medicine, investigated how welcoming psychology journals are to publishing replications. "We wanted to investigate whether journals specifically rejected (or did not recommend) the submission of replications. We did this by examining the aims and instructions to authors of 1151 journals in psychology," says Martin.

The team found that only 3% of the psychology journals on their list explicitly stated that they accepted replications. There was no difference between high and low impact journals and no difference between the different branches of psychology.

Of the journals surveyed, 33% emphasized the need for scientific originality in submissions, which discourages scientists from submitting replications, while 63% neither encouraged nor discouraged replications. The remaining 1% actively discouraged them.

So how do we make psychology journals more welcoming to replications? "We've suggested that all journals in psychology should state that they accept replications, whether their results are positive or negative," explains Martin. "Researchers could also submit two papers for publication when they submit original research: one which reports the original results and one replication which acts as a test of the original findings."

An alternative approach to traditional publishing, in the form of impact-neutral publishers, could also help to increase the number of published replications. Impact-neutral publishers, such as Frontiers, PLOS ONE and many BMC journals, don't make value judgements on perceived impact, and instead assess the scientific validity of a submitted study, when deciding whether to publish it. "If a study tests a hypothesis based on sound reasoning, using sound methodology with appropriate data analysis, those should be the absolute criteria for publishing," says Martin.

More information: G. N. Martin et al, Are Psychology Journals Anti-replication? A Snapshot of Editorial Practices, Frontiers in Psychology (2017). DOI: 10.3389/fpsyg.2017.00523
