Researchers find researchers overestimate soft-science results—US the worst offender

Aug 27, 2013 by Bob Yirka

Researchers have found that authors of "soft science" research papers tend to overstate results more often than researchers in other fields. In their paper published in Proceedings of the National Academy of Sciences, Daniele Fanelli and John Ioannidis write that the worst offenders are in the United States.

In the science community, soft research has come to mean research done in areas that are difficult to measure, psychology being the most well-known example. Science conducted on the ways people (or animals) respond in experiments is often difficult to reproduce or to describe in measurable terms. For this reason, the authors note, research based on behavioral methodologies has for several decades been considered at higher risk of bias than other sciences. Such biases, they suggest, tend to lead to inflated claims of success.

The problem, Fanelli and Ioannidis suggest, is that in soft science there are more "degrees of freedom": researchers have more room to engineer experiments that will confirm what they already believe to be true. Success in such sciences thus comes to mean meeting expectations, rather than reaching a clearly defined goal or even discovering something new.

The researchers came to these conclusions by locating and analyzing 82 recent meta-analyses (papers produced by researchers studying published research papers) in genetics and in psychiatry, covering 1,174 studies in all. Including genetics allowed the duo to compare soft-science studies with hard-science studies, as well as with those that combined the two.

In analyzing the data, the researchers found that those in the soft sciences tended not only to inflate their findings but also, more often, to report that the outcome of their research matched their original hypotheses. They also found that papers listing U.S.-based researchers as leads tended to be the worst offenders. In the offenders' defense, the authors suggest that the publish-or-perish atmosphere in the U.S. contributes to the problem, as does the difficulty of defining parameters of success in the soft sciences. The authors also noted that research efforts that included both hard and soft science were less likely than pure soft-science efforts to lead to inflated results.

More information: US studies may overestimate effect sizes in softer research, Proceedings of the National Academy of Sciences, published online August 26, 2013. DOI: 10.1073/pnas.1302997110

Many biases affect scientific research, causing a waste of resources, posing a threat to human health, and hampering scientific progress. These problems are hypothesized to be worsened by lack of consensus on theories and methods, by selective publication processes, and by career systems too heavily oriented toward productivity, such as those adopted in the United States (US). Here, we extracted 1,174 primary outcomes appearing in 82 meta-analyses published in health-related biological and behavioral research sampled from the Web of Science categories Genetics & Heredity and Psychiatry and measured how individual results deviated from the overall summary effect size within their respective meta-analysis. We found that primary studies whose outcome included behavioral parameters were generally more likely to report extreme effects, and those with a corresponding author based in the US were more likely to deviate in the direction predicted by their experimental hypotheses, particularly when their outcome did not include additional biological parameters. Nonbehavioral studies showed no such "US effect" and were subject mainly to sampling variance and small-study effects, which were stronger for non-US countries. Although this latter finding could be interpreted as a publication bias against non-US authors, the US effect observed in behavioral research is unlikely to be generated by editorial biases. Behavioral studies have lower methodological consensus and higher noise, making US researchers potentially more likely to express an underlying propensity to report strong and significant findings.
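The abstract's core measurement, how far each primary study's result deviates from the overall summary effect of its meta-analysis, can be illustrated with a small sketch. This is purely illustrative: the effect sizes and variances below are invented, and a fixed-effect (inverse-variance weighted) summary is assumed, which the paper does not necessarily use in exactly this form.

```python
# Illustrative sketch: deviation of each primary study's effect size from
# the meta-analytic summary effect. All numbers are made up.

def summary_effect(effects, variances):
    """Fixed-effect summary: inverse-variance weighted mean of effect sizes."""
    weights = [1.0 / v for v in variances]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

def deviations(effects, variances):
    """Signed deviation of each study from the summary effect.
    A positive deviation overstates the effect relative to the consensus."""
    s = summary_effect(effects, variances)
    return [e - s for e in effects]

# Toy meta-analysis of four studies (e.g., standardized mean differences)
effects = [0.10, 0.45, 0.30, 0.80]
variances = [0.02, 0.05, 0.03, 0.10]

s = summary_effect(effects, variances)
for e, d in zip(effects, deviations(effects, variances)):
    print(f"study effect {e:.2f} deviates from summary {s:.3f} by {d:+.3f}")
```

A study whose deviation is large and in the direction its authors predicted is the kind of "extreme effect" the paper counts; the US effect is the finding that such deviations cluster among US-led behavioral studies.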


User comments : 10


2.3 / 5 (27) Aug 27, 2013
also add so-called "climate researchers" to the list.
2.6 / 5 (15) Aug 27, 2013
I once knew an M.D. who would write his papers before he would do the experiment - and, behold, his conclusions were correct.
1 / 5 (15) Aug 27, 2013
Soft science is done by Progressives to prove Progressive ideology is superior. Since the ends justify the means for Progressives, it is no surprise that they fudge (lie) and manipulate the results in order to prove the falsehood that they are superior.

Progressives lie, cheat, and steal better than any other group.
2.1 / 5 (14) Aug 27, 2013
Well, that's three dumbasses... where's the rest? I thought they would be here by now.
1.8 / 5 (10) Aug 27, 2013
Is anyone surprised? I wish Physorg wouldn't publish these soft studies, especially psychology and sociology studies based on animals. My candidate for dumbest soft science study was one that proclaimed the existence of "Mental Time Travel" in birds. (A bird started its morning feeding in the same place where it found ants the evening before.)

For the record I'm an economist, and I'll admit most Econ articles are soft and self-important. Physorg would be better without most of these econ articles, which are widely available elsewhere. One of the reasons I read Physorg is because it's refreshing to read the important and amazing scientific advances made by biologists, chemists, engineers, physicists, etc.
3 / 5 (14) Aug 27, 2013
But ... isn't this study soft science and therefore the researchers finding this about studies like this means that...

Now my head hurts.
1 / 5 (9) Aug 28, 2013
Picking psychiatry as the representative soft science wasn't wise. Psychiatry is particularly prone to bias and subjectivity, and in this regard the study was unfair to the soft sciences. It would have been better to pick standard psychology or behavioral economics or social psychology or something of that sort.
5 / 5 (3) Aug 28, 2013
the publish-or-perish atmosphere in the U.S.

That sort of pressure is (unfortunately) not limited to the US. It's a global phenomenon.

Soft science results are nevertheless interesting. One shouldn't infer that, because studies in soft sciences tend to overstate the case, they are therefore completely false. (That would be, as funny as it may seem, a 'soft science' conclusion.)
It simply means that one has to take these sorts of papers cum grano salis.

It's also a lot harder to check results from soft science papers than from hard science papers. But I wouldn't go so far as to conclude that soft science authors work less diligently.
3.7 / 5 (3) Aug 28, 2013
Soft science results are nevertheless interesting.

It's also a lot harder to check results from soft science papers than from hard science papers.

Language tells us a lot. Calling this a soft "science" gives these very human endeavors a cachet that sounds as if the practitioners know what they're talking about. In reality, it's all educated guesswork and statistical analyses to show where things are at the moment.

These are definitely things worthy of study. These are definitely interesting topics. But they are not sciences. Calling them sciences shows how little is actually understood about the subject.

The famed physicist and all-around great guy Richard Feynman would have had a good time ripping the term "soft science" to shreds.
5 / 5 (3) Aug 28, 2013
In reality, it's all educated guess work and statistical analyses to show where things are at the moment.

It's the same in the hard sciences. You ALWAYS work with educated guesses and statistical analyses - be it psychology or collider data.
The difference is, as the article points out, that in the hard sciences you can more easily control the variables (and the number of variables). So the results in the soft sciences are always more prone to a larger error margin and/or some sort of involuntary bias.

These are definitely interesting topics. But they are not sciences.

They do use the scientific method. The problem is often that what they measure cannot be rigidly defined and is also often codependent on other factors outside the scope of the experiment. (It's relatively easy to define/test 'speed' as opposed to, say, 'intelligence'. )

For some applications, however, fuzzy terms (like intelligence) suffice. So a 'soft' study can deliver useful knowledge.