Study finds scientific reproducibility does not equate to scientific truth


Reproducible scientific results are not always true and true scientific results are not always reproducible, according to a mathematical model produced by University of Idaho researchers. Their study, which simulates the search for scientific truth, will be published Wednesday, May 15, in the journal PLOS ONE.

Independent confirmation of scientific results—known as replication—lends credibility to a researcher's conclusion. But researchers have found the results of many well-known studies cannot be reproduced, an issue referred to as a "replication crisis."

"Over the last decade, people have focused on trying to find remedies for the 'replication crisis,'" said Berna Devezer, lead author of the study and U of I associate professor of marketing in the College of Business and Economics. "But proposals for remedies are being accepted and implemented too fast without solid justifications to support them. We need a better theoretical understanding of how science operates before we can provide reliable remedies for the right problems. Our model is a framework for studying science."

Devezer and her colleagues investigated the relationship between reproducibility and the discovery of scientific truths by building a mathematical model that represents a scientific community working toward finding a scientific truth. In each simulation, the scientists are asked to identify the shape of a specific polygon.

The modeled scientific community included multiple scientist types, each with a different research strategy, such as performing highly innovative experiments or simple replication experiments. Devezer and her colleagues studied whether factors like the makeup of the community, the complexity of the polygon and the rate of reproducibility influenced how fast the community settled on the true polygon shape as the scientific consensus, and how persistently the true polygon shape remained the consensus.

Within the model, the rate of reproducibility did not always correlate with the probability of identifying the truth, the speed at which the community identified it, or whether the community stuck with the truth once it was identified. These findings indicate reproducible results are not synonymous with finding the truth, Devezer said.

Compared to other research strategies, highly innovative research tactics resulted in a quicker discovery of the truth. According to the study, a diversity of research strategies protected against ineffective research approaches and optimized desirable aspects of the scientific process.
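The published model is far more elaborate (a commenter below notes it uses Markov chain methods and agent-based modeling), but the core idea — a simulated community of innovators and replicators converging on a true hypothesis through noisy experiments — can be sketched in a few lines of Python. Everything here is an illustrative assumption, not the authors' implementation: the function name `simulate`, the two agent types, the noise model, and the parameter values are all hypothetical.

```python
import random

def simulate(n_agents=20, n_hypotheses=8, innovator_frac=0.5,
             accuracy=0.8, max_steps=2000, seed=0):
    """Toy community search for the one true hypothesis (index 0).

    Agents with index below n_agents * innovator_frac are "innovators"
    who test randomly chosen hypotheses; the rest are "replicators" who
    re-test the current consensus. Each experiment reports the correct
    verdict with probability `accuracy`. The community adopts any
    hypothesis whose latest experiment came back positive.

    Returns (steps_until_consensus_is_true, replication_success_rate).
    """
    rng = random.Random(seed)
    consensus = rng.randrange(1, n_hypotheses)  # start away from the truth
    replications = successes = 0
    for step in range(1, max_steps + 1):
        for i in range(n_agents):
            is_innovator = i < n_agents * innovator_frac
            hypothesis = rng.randrange(n_hypotheses) if is_innovator else consensus
            truly_correct = hypothesis == 0
            # noisy experiment: correct verdict with probability `accuracy`
            observed_positive = (truly_correct if rng.random() < accuracy
                                 else not truly_correct)
            if not is_innovator:
                replications += 1
                successes += observed_positive  # replication confirmed consensus
            if observed_positive:
                consensus = hypothesis  # a positive result shifts the consensus
        if consensus == 0:  # community has settled on the truth
            break
    rate = successes / replications if replications else 0.0
    return step, rate
```

Varying `innovator_frac` or `accuracy` in a sketch like this lets one explore the qualitative point of the study: a community can report a high replication-success rate while converging slowly — or not at all — on the true hypothesis.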

Variables including the makeup of the community and the complexity of the true polygon influenced the speed at which scientists discovered the truth and the persistence of that truth, suggesting a lack of reproducibility should not automatically be blamed on questionable research practices or problematic incentives, Devezer said. Both have been pointed to as drivers of the "replication crisis."

"We found that, within the model, some research strategies that lead to reproducible results could actually slow down the scientific process, meaning reproducibility may not always be the best—or at least the only—indicator of good science," said Erkan Buzbas, U of I assistant professor in the College of Science, Department of Statistical Science and a co-author on the paper. "Insisting on reproducibility as the only criterion might have undesirable consequences for scientific progress."



More information: PLOS ONE (2019). journals.plos.org/plosone/arti … journal.pone.0216125

Citation: Study finds scientific reproducibility does not equate to scientific truth (2019, May 15) retrieved 16 July 2019 from https://phys.org/news/2019-05-scientific-equate-truth.html


User comments

May 15, 2019
The scientific method requires observation and testability. Generally, testability means reproducibility. So, a fact may exist that is not reproducible, but it is also outside the realm of science. That is, science cannot discover all truth. Logic cannot discover all truth.

May 15, 2019
This is good news because as we know from reading the comments sections here at physorg, all the coolest scientific phenomena are absolutely not reproducible. Heck, they're usually not even observable.

May 15, 2019
But the TRUTH remains in spite of scientific consensus toward the more popular hypotheses/theories at the expense of any up-and-coming alternatives that are equally valid and testable. It most often hinges on the willing acceptance of something new even if that something may go against some points that were previously found acceptable by the scientific community.
And what is accepted in total by the scientific community is also found to be acceptable by the pseudoscientists such as are found on the internet. However, unless that something is readily accepted by scientists/researchers along with peer review and published papers that so many count on for the verified data, the internet pseudoscientists refuse to even consider any alternative to what they already know from reading published authors. It often borders on insanity how prospective alternatives are brushed off as of no value because they don't kowtow to the known theories that older theorists have published.

May 16, 2019
The paper is good, it uses Markov chain methods and agent based modeling to assess what happens. The main find of the paper is not that any one research strategy is better, but that using all of them is, as measured against all properties (such as speed of discovery, reproducibility, et cetera). It is unsurprising that not all scientists should work on replication, say.

@Pooua: It seems to me you try to inject philosophy into an analysis of science. That has not worked for the last 3,000 years, and especially not for the 400 years we have had science. For example, facts have nothing to do with whether you can assign truth values to them. You can assign truth values within any (reasonable) model you make.

It is easy to point to facts that are known but not observable by their very nature, such as the non-existence of negative mass.

May 18, 2019
LOL This whole article and the lead author is full of BS.

"Over the last decade, people have focused on trying to find remedies for the 'replication crisis,'" said Berna Devezer, lead author of the study and U of I associate professor of marketing in the College of Business and Economics. "But proposals for remedies are being accepted and implemented too fast without solid justifications to support them. We need a better theoretical understanding of how science operates before we can provide reliable remedies for the right problems. Our model is a framework for studying science." says the "associate professor of MARKETING".

Evidently, she doesn't understand that THE SCIENTIFIC METHOD DEMANDS that experiments be reproducible as well as TRUTHFUL in order to duplicate or replicate the step-by-step method that resulted in a reproducible experiment. It is within that reproducibility that the TRUTH lies FOR THAT PARTICULAR RESULT. The RESULT itself provides the Truth.

May 18, 2019
It works in physics.

Just sayin'.

May 18, 2019
It works in physics.

Just sayin'.
.....but I remind you that Pop-Cosmology is 90% non-physics, you know, like the Pop-Cosmology fantasy that infinite gravity exists on a finite stellar mass they've labeled black holes, a fantasy in 100% violation of the Immutable Inverse Square Law for Gravity of REAL PHYSICS.

May 19, 2019
Wonder if these guys work for the church or the social sciences (or both). No wonder it makes so many humans happy, as we aren't a truth-friendly species.

May 19, 2019
So, if this is about real scientific discoveries (I won't say "truth;" it's too loaded a concept) then how come they didn't track the significance values?

I'd like to know if this is still correct for high sigma values, and if increasing them (using more trials, of course) affects the measurements. For that matter, I'd like to see the sigma values for the tests they performed. These data are not in the paper.

I think it's more bullshit; we got a business major and a philosopher along with a statistician and a computer science major. Not a physicist in sight. Looks like a bunch of sociologists, and they're the ones with the trouble. How come they don't point out that the greater significance of physics results is due to higher statistical significance and excoriate the sociologists for relying on low significance results?

It seems pretty obvious to me. The sociologists have been trying to compete with physics and have failed miserably. See the Sokal Affair.

May 19, 2019
As far as I can see, the sociologists and philosophers have tried for a hundred years to pretend their results are as reliable as those of physics, and failed miserably. Derrida was the first one to try to turn it inside out and question physics; he failed miserably but all the sociologists and philosophers started attacking physics using his methods. Physics is not a "narrative;" you got the data or you don't. The Sokal Affair clinched it; an article that never would have passed peer review in physics was published by the sociologists. They've been frantically trying to cover the mess in the litter box ever since.

May 20, 2019
It is this kind of finding that gives science a bad name. Obviously it is not true science, which, whatever its failings and inexactness, is the method by which we come as close to reality and truth as we possibly can. In many cases this knowledge tells us about the subject being studied to a degree that is not only useful, but it also enables us to better understand the wider nature of our subject and where it fits into a bigger picture.

May 20, 2019
Sometimes, some of your apples are oranges; other times, all of your oranges are apples.

I don't buy it.

According to their own claim, what they're saying doesn't need to be reproducible to be true; and it may not be true even if it's reproducible.

Their own result states their own result can't be trusted.

May 20, 2019
Wait a minute. So, doing the same thing over and over and expecting different results is not insanity?

May 20, 2019
If your results are only 2 sigma, you might need to do it again a thousand times. Or a million.

May 20, 2019
1) Anybody with a brain that isn't blinkered knows there are things that are true that aren't reproducible. Science was not, and should not be, intended to be the sole arbiter of ALL truth. 2) Scientists have known since they were still "natural philosophers" that mere reproduction of results is only part of the method, akin to mere inductive logic, which science was designed to go beyond. Using deductive reasoning to design "highly innovative experiments" as well as reproducing previous results has always been part of the method. 4) I'm highly skeptical of anything which talks about "complexity" of polygons, unless perhaps it involves multiple dimensions. True complexity (as distinguished from "order" or "intricacy") involves systematic dynamics. 5) Pretty much everything the other skeptics of this paper wrote.

May 24, 2019
You forgot 3).

May 26, 2019
It seems pretty obvious to me. The sociologists have been trying to compete with physics and have failed miserably. See the Sokal Affair.


Concerning the Sokal Affair, Bruno Latour, who wrote We Have Never Been Modern, opened an unintentional can of worms, but his protege Isabelle Stengers, who has advanced degrees in science, chemistry to be sure, has a lot of valuable things to say about this very topic; as such, she is not as easy to dismiss. While the journal that published nonsense and was a major aspect of the Sokal Affair deserved its comeuppance, that ought not to invalidate all of sociology nor the philosophy of science as such, and again I highly recommend Stengers to all those interested in the very topics poorly broached by this article.
