Criteria for funding and promotion lead to bad science
Scientists are trained to assess theories carefully by designing good experiments and building on existing knowledge. But there is growing concern that too many research findings may in fact be false. New research published 10 November in the open-access journal PLOS Biology by psychologists at the universities of Bristol and Exeter suggests that this may happen because of the criteria used to fund science and promote scientists, which, the authors say, place too much weight on novel, eye-catching findings.
Some scientists are becoming concerned that published results are inaccurate: a recent attempt by 270 scientists to reproduce the findings reported in 100 psychology studies (the Reproducibility Project: Psychology) found that only about 40 per cent could be reproduced.
This latest study shows that we shouldn't be surprised by this, because researchers who want to further their careers are incentivised to work in certain ways, such as running a large number of small studies rather than a smaller number of larger, more definitive ones. But while this might be good for their careers, it won't necessarily be good for science.
Professor Marcus Munafò and Dr Andrew Higginson, researchers in psychology at the universities of Bristol and Exeter, concluded that scientists aiming to progress should carry out lots of small, exploratory studies because this is more likely to lead to surprising results. The most prestigious journals publish only highly novel findings, and scientists often win grants and get promotions if they manage to publish just one paper in these journals, which means that these small (but unreliable) studies may be disproportionately rewarded in the current system.
The authors used a mathematical model to predict how an optimal researcher who is trying to maximise the impact of their publications should spend their research time and effort. Scientific researchers have to decide what proportion of time to invest in looking for exciting new results rather than confirming previous findings. They also must decide how much resource to invest in each experiment.
The model shows that the best strategy for career progression is to carry out lots of small exploratory studies and no confirmatory ones. Even though each small study is less likely to detect a real effect when one is there, running many of them is likely to produce some false positives, which unfortunately are often published too.
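The intuition behind this result can be sketched with a toy Monte Carlo simulation. This is not the authors' actual model; the parameters here (10 per cent of tested hypotheses true, a standardised effect size of 0.3, the conventional p < 0.05 threshold, and a fixed budget of 10,000 participants) are illustrative assumptions chosen to show how splitting a fixed budget into many small studies yields more "significant" results, many of them false positives:

```python
import math
import random

def z_test_significant(sample, z_crit=1.96):
    """Two-sided one-sample z-test against zero, assuming known sd = 1.
    z_crit = 1.96 corresponds to the conventional p < 0.05 threshold."""
    n = len(sample)
    z = (sum(sample) / n) * math.sqrt(n)
    return abs(z) > z_crit

def positives_from_budget(total_participants, n_per_study,
                          p_true=0.1, effect=0.3, rng=random):
    """Spend a fixed participant budget on studies of a given size and
    count how many come out 'significant' (true or false positives)."""
    positives = 0
    for _ in range(total_participants // n_per_study):
        # Only a minority of tested hypotheses correspond to a real effect.
        mean = effect if rng.random() < p_true else 0.0
        sample = [rng.gauss(mean, 1.0) for _ in range(n_per_study)]
        if z_test_significant(sample):
            positives += 1
    return positives

random.seed(42)
many_small = positives_from_budget(10_000, 25)    # 400 small studies
few_large = positives_from_budget(10_000, 250)    # 40 large studies
print(many_small, few_large)
```

Under these assumptions, the small-study strategy yields several times as many publishable "hits" from the same budget, even though a far larger share of them are false positives: the 5 per cent false-positive rate applies to every one of the 400 underpowered studies.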
Dr Higginson said: "This is an important issue because so much money is wasted doing research from which the results can't be trusted; a significant finding might be just as likely to be a false positive as actually be measuring a real phenomenon."
This wouldn't happen if more of a scientist's publications, rather than just one or two high-profile ones, mattered to their career, or if novel findings weren't prized so much more than work that confirms previous findings, say the researchers.
So is there any way to overcome this problem of bad scientific practice? There could be immediate solutions, as Professor Munafò explained: "Journal editors and reviewers could be much stricter about good statistical procedures, such as insisting on large sample sizes and tougher statistical criteria for deciding whether an effect has been found."
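To illustrate why insisting on large sample sizes and tougher statistical criteria matters (a standard power calculation, not part of the study itself), the following sketch approximates the power of a two-sided two-sample z-test. The effect size of 0.5 and the group sizes are assumed example values:

```python
import math

def normal_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def approx_power(effect_size, n_per_group, z_crit=1.96):
    """Approximate power of a two-sided two-sample z-test.
    z_crit = 1.96 corresponds to the conventional alpha = 0.05;
    z_crit = 2.81 to the stricter alpha = 0.005."""
    shift = effect_size * math.sqrt(n_per_group / 2.0)
    return normal_cdf(shift - z_crit)

# A medium effect (d = 0.5) with only 20 participants per group
# is detected barely a third of the time:
print(approx_power(0.5, 20))   # ~0.35
# The same effect with 64 per group reaches the conventional 80% power:
print(approx_power(0.5, 64))   # ~0.81
# A tougher threshold (alpha = 0.005) demands a larger sample still:
print(approx_power(0.5, 64, z_crit=2.81))
```

The point of the sketch: small studies mean that a "significant" result is roughly as likely to be noise as signal, which is exactly the concern raised above.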
There are already some encouraging signs. For example, a number of journals are introducing reporting checklists that require authors to state, among other things, how they decided on the sample size they used. Funders are also making similar changes to grant application procedures.
"The best thing for scientific progress would be a mixture of medium-sized exploratory studies with large confirmatory studies," said Dr Higginson. "Our work suggests that researchers would be more likely to do this if funding agencies and promotion committees rewarded asking important questions and good methodology, rather than surprising findings and exciting interpretations."