Researchers suggest lack of published null result papers skews reliability of those that are published

Aug 29, 2014 by Bob Yirka report
Credit: Linnaeus University

(Phys.org) — A trio of researchers at Stanford University has shone a light on a problem many in the social science research arena are aware of but tend to ignore: null result papers going unwritten and unpublished. In their paper published in the journal Science, Annie Franco, Neil Malhotra and Gabor Simonovits suggest that not publishing null result papers produces a bias in the literature, skewing the reliability of the strong-result papers that do get published. Jeffrey Mervis offers an In Depth piece on the team's work in the same journal edition.

What should a social scientist do if he or she comes up with a hypothesis regarding human behavior, designs a way to test it, runs that test, and then learns that nothing new has been found? At first blush, it might seem logical to toss the idea into the trash, or the file drawer, and move on to something else, which is exactly what a lot of researchers do, the authors of this new effort report. After all, if you don't find anything relevant or pertinent, others might think you didn't actually accomplish anything, so why write up a paper describing what happened and submit it to a journal?

The answer lies in the domain of published results: if respected journals only ever publish strong-result papers, an impression is created that only research producing strong results is important, which of course is nonsense. It also leaves the field open to wasted effort when other researchers come up with the same hypotheses and the same results.

To learn more about the problem, the researchers pulled data from TESS, an online program that allows researchers to get data from surveys that have been conducted as part of research efforts sponsored by the National Science Foundation. The team found that only 48 percent of the studies begun were completed, so they contacted the study leaders to find out what happened to those that weren't represented. Their work revealed that just 20 percent of null result studies wound up being published, and that an astounding 65 percent of the null result studies had never even resulted in a written paper: the researchers had simply walked away. When asked why, many suggested that to do so would be wasted effort as there would be little interest from journals.
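The percentage-point gaps the study reports can be sketched as a toy calculation. The counts below are hypothetical, chosen only to illustrate the arithmetic of comparing publication rates across outcome types (the paper's actual tables differ):

```python
# Toy illustration of the publication-gap arithmetic described in the study.
# All counts here are hypothetical, not the paper's actual data.

def rate(reached: int, total: int) -> float:
    """Fraction of studies that reached a given stage (e.g. publication)."""
    return reached / total

# Hypothetical: of 100 strong-result studies, 60 are published;
# of 100 null-result studies, only 20 are published.
strong_published = rate(60, 100)
null_published = rate(20, 100)

# Gap expressed in percentage points, as in the abstract's "40 percentage
# points more likely to be published" phrasing.
gap_pp = (strong_published - null_published) * 100
print(f"Publication gap: {gap_pp:.0f} percentage points")
```

The key point the example makes concrete is that the gap is a difference of rates (percentage points), not a ratio: 60% vs. 20% is a 40-point gap even though strong results are three times as likely to be published.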

The researchers suggest that perhaps a new repository be set up for null result papers, one that would be accessible by other researchers. That would allow for a future scenario in which a scientist could ask their computer about an idea and get back a full history of the research surrounding it, rather than a skewed list showing only the work of successful endeavors.


More information: Publication bias in the social sciences: Unlocking the file drawer, Science DOI: 10.1126/science.1255484

ABSTRACT
We study publication bias in the social sciences by analyzing a known population of conducted studies—221 in total—where there is a full accounting of what is published and unpublished. We leverage TESS, an NSF-sponsored program where researchers propose survey-based experiments to be run on representative samples of American adults. Because TESS proposals undergo rigorous peer review, the studies in the sample all exceed a substantial quality threshold. Strong results are 40 percentage points more likely to be published than null results, and 60 percentage points more likely to be written up. We provide not only direct evidence of publication bias, but also identify the stage of research production at which publication bias occurs—authors do not write up and submit null findings.



User comments : 15


mahi
1 / 5 (4) Aug 29, 2014
That's what one would conclude from just common sense!
antialias_physorg
4 / 5 (8) Aug 29, 2014
The problem isn't really 'little interest by journals', but that there is a limited space in journals (or conference proceedings). And when you have to choose what to publish and what to reject it's pretty natural to go for papers that demonstrate something new over those that don't.
(This doesn't mean that null papers aren't important or that they shouldn't be written/published. Personally I think it's very important to do so to prevent duplicated effort. It's just a reasonable rationale for why they usually aren't chosen by journals for inclusion...and knowing that: why many researchers don't bother writing them. If I knew a paper had only a 30% chance of being picked up I wouldn't invest the month to write it, either.)
Jixo
2.5 / 5 (8) Aug 29, 2014
That's what one would conclude from just common sense!
I don't think so. In certain areas of research negative results aren't common due to the descriptive or trivial "Duh" character of that research. Such research may not be wrong - it just brings no risk for researchers (which is probably why these researchers focused on it). Descriptive research is indeed important too, but when most scientists do it exclusively, progress stops. Rather, the lack of replications indicates the bias of the whole community in the first place (cold fusion being the most important example). I'm not talking about replications of the findings of garage researchers, which often aren't well documented - but about normal findings published in the standard way.
Jixo
1.9 / 5 (9) Aug 29, 2014
Actually the current situation is that many researchers are reluctant to publish null results out of fear that they would be rendered untrustworthy in the eyes of others, i.e. for exactly the opposite reason than the above study implies. The frontier research of many boundary phenomena tends to be noisy until you learn how to adjust the conditions of the experiments. The level of experimental noise therefore works in both directions: too much noise or too many negative results indicates that the research is suspicious or clueless, in the same way as results that are way too pretty/smooth.
Jixo
2 / 5 (8) Aug 29, 2014
The problem isn't really 'little interest by journals', but that there is a limited space in journals (or conference proceedings).
The problem is on the side of both journals and researchers. In particular, the ignoring of cold fusion research or scalar waves isn't really a problem of journals, but of the fact that (nearly) nobody wants to do this research. But we have many replications published in non-mainstream journals, so once you don't insist on the highest impact journal, nothing actually prevents you from publishing many studies about it.

It's true that the insistence on high impact publishing skews the willingness of researchers to publish in lower profile journals, because they will not get as much appreciation from grant agencies in the future. But this factor is not so limiting for theorists, who don't need high investments for their research. For example, the string theorists based their hype mostly on articles that were posted to arXiv.org only.
antialias_physorg
4.3 / 5 (11) Aug 29, 2014
but the fact, (nearly) nobody wants to do this research

So do the research. What's stopping you? You obviously want to. And anyhow: the stuff does, by your claims, already work. So why does it need research?

In certain areas of research the negative results aren't common

News flash: Only a very small percentage of what you try in research actually pans out. Negative results are the overwhelming majority of day-to-day research. If you wrote a paper every time something didn't work you'd never get anything done.

Writing papers is a drag. It's not fun. It keeps you from doing what you're paid for. It's writing stuff down that you already did (and the fun in being a scientist is doing what neither you nor anyone else has ever done before).
Budgets and timelines never include the month or so it takes to write/publish. You do it for your career (to gather impact factor on your way to a professorship) or because you must (i.e. because your professor tells you to)
mahi
not rated yet Aug 29, 2014
That's what one would conclude from just common sense!
I don't think so. In certain areas of research negative results aren't common due to the descriptive or trivial "Duh" character of that research. Such research may not be wrong - it just brings no risk for researchers (which is probably why these researchers focused on it). Descriptive research is indeed important too, but when most scientists do it exclusively, progress stops. Rather, the lack of replications indicates the bias of the whole community in the first place (cold fusion being the most important example). I'm not talking about replications of the findings of garage researchers, which often aren't well documented - but about normal findings published in the standard way.


I suppose common sense is relative. And that is the reason why even trivial things need costly research.
Modernmystic
3 / 5 (2) Aug 29, 2014
Writing papers is a drag. It's not fun. It keeps you from doing what you're paid for.


Sounds like the medical field :)

Anti,

What you said earlier about limited space in journals; I'm curious: is that a logistical problem of getting enough people for peer review, or are you talking about a physical limitation on space? I ask because with electronic media actual space wouldn't be a problem. If you could elaborate it would be appreciated. Thanks!
julianpenrod
1.7 / 5 (12) Aug 29, 2014
antalias_physorg, along with their trained stable of dutiful 5 point approval givers, gives a determined show of trying to "justify" "science" being a deceitful construct only of successful ventures, even when they have to lie about it. The old joke associated with Surveyor goes that a reporter asks, "What if, when it lands, Surveyor promptly sinks in the lunar soil?" and a scientist replies, "Well, we would have learned something right there." How interesting that antalias_physorg mendaciously places economic interests of magazines, as honest as any corporation's economic declarations can be, as an "excuse" for not revealing truth! Why don't they provide more than the calculatedly "limited space"? Or why don't the "scientists" design a system of encapsulating results, even of a hypothesis not working, that is compact, easy and reliable? They're "scientists", after all!
antialias_physorg
3.9 / 5 (7) Aug 29, 2014
I'm curious is that a logistical problem in getting enough people for peer review or are you talking about a physical limitation on space?

It's a bit of a mix. Journals have a limited length. Publishers of journals want to make money. They're not in it for the science. Good journals have a high impact factor associated with them so it's in their interest to not accept every paper that comes along but only the best (and 'best' in science means something that is truly novel...which null-hypothesis papers by definition aren't so much).

getting enough people for peer review

There are enough people to peer review.
Note. Peer review is not paid for by publishers. They just send out papers to other researchers in the field and let them spend a few days burning their own cash to do reviews. It's not uncommon to be handed a handful of papers per month and spend 2-3 days per month on review alone. This is a "courtesy service" of researchers for each other.
antialias_physorg
3.9 / 5 (7) Aug 29, 2014
[cont]
For conference proceedings the space is limited by the number of speakers at the associated conference. Depending on conference type, rejections can be as high as 40-50% (which may not sound like a lot, but remember that you only even consider writing a paper for a conference if you expect to get in. No one writes 'on the off chance' that it gets accepted. A good conference paper takes more than a week to write, to the exclusion of everything else - and no one has that kind of time to waste).
I've seen null-hypothesis papers get published - but that is rare.

Sounds like the medical field

Well, I come from a med-tech field. But from interacting with other researchers (from mathematics to nanotechnology to robotics to agriculture) it seems to be pretty much the same drag everywhere. It probably gets fun when you're professor and start writing books.
antialias_physorg
4 / 5 (8) Aug 29, 2014
antalias_physorg, along with their trained stable of dutiful 5 point approval givers

Envious? Poor baby. Here: Have a 5. On the house. It means nothing.

How interesting that antalias_physorg mendaciously places economic interests of magazines, as honest as any corporation's economic declarations can be, as an "excuse" for not revealing truth!

I don't like the way papers are published by companies that don't care about science any more than you do. Mostly because you have to sign away the rights to your own article and even pay to get 'extras' (like getting color images printed in your own article instead of B/W).

However, the people who do peer review are researchers (and not associated with the publishers) - so the quality of selection of what goes in and what doesn't is all right.

It's not an excuse. It's just the way it is.
antialias_physorg
4 / 5 (8) Aug 29, 2014
I ask because with electronic media actual space wouldn't be a problem.

Here's my 2 cents on that. I love arxiv and the idea of Open Access. But they are not without their faults.
I realize that peer review is important. Just having a bag of papers without any kind of quality control is not good enough. There are cranks out there who'd flood these portals. There are papers where you go "there's some systemic problem here" - and it's good that these papers don't pass peer review.

Papers are supposed to inform other researchers. That means *especially* PhD students. They must be able to rely on the papers they read in journals/proceedings to be top notch to get a solid understanding of the field and for their own research. They have neither the time (nor the discerning ability) to wade through piles of junk (or even just low-level research).
TimLong2001
1 / 5 (1) Aug 29, 2014
E.g., the null result for the existence of the neutrino in a Los Alamos National Laboratory experiment, even though given a short column report in Science, was notably ignored by the scientific establishment.
Mike_Massen
5 / 5 (2) Aug 30, 2014
Assumptions play a big part in the issue.
E.g. it had been KNOWN that lipids & water don't mix - pretty trivial, put up with it, no point 'wasting' time researching it - the assumption being based on issues of polar vs non-polar etc.

But one particular Australian :-) scientist chose to investigate, even though starting such research met with some derision.

He did achieve comparative greatness: the simple reason oils & water don't mix is due to the dissolved gases in water; take them out and oils & water mix fine!

Such a simple issue, properly addressed & understood with great detail has led to advances in drug delivery. Oily liquid drugs can now be fully mixed with water for far better absorption/safety in the body.

All this happened fairly recently, google is your friend if interested so hey don't take my word for it, do your own peer review of my posting here on this thread :-)

99% of all scientists that ever lived (in all of history) are alive today, yet we obviously don't have enough !
