Is the p-value pointless?

March 17, 2016 by Lauren Richardson, PLOS Blogs
Figure: from Head et al.'s PLOS Biology Perspective on p-hacking, showing that p-hacking alters the distribution of p-values in the range considered "statistically significant".

For the first time in its 177-year history, the American Statistical Association (ASA) has voiced its opinion and made specific recommendations for a statistical practice. The subject of their ire? The (arguably) most common statistical output, the p-value. The p-value has long been the primary metric for demonstrating that study results are "statistically significant," usually by achieving the semi-arbitrary threshold of p < 0.05. The ASA statement argues, however, that the importance of the p-value has been greatly overstated and that the scientific community has become over-reliant on this single, flawed measure.
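To see why the 0.05 cutoff is often called semi-arbitrary, consider a minimal simulation sketch (my illustration, assuming Python with NumPy and SciPy; it is not part of the article or the ASA statement): when the null hypothesis is true, p-values are roughly uniformly distributed, so about 5% of tests come out "significant" purely by chance.

# Minimal sketch: with no real effect, ~5% of tests still yield p < 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 10_000

p_values = []
for _ in range(n_experiments):
    # Two groups drawn from the SAME distribution: the null hypothesis is true.
    a = rng.normal(loc=0.0, scale=1.0, size=30)
    b = rng.normal(loc=0.0, scale=1.0, size=30)
    _, p = stats.ttest_ind(a, b)
    p_values.append(p)

p_values = np.array(p_values)
print(f"Fraction with p < 0.05: {np.mean(p_values < 0.05):.3f}")  # roughly 0.05

The point of the sketch is that a single crossing of the 0.05 line carries no guarantee of a real effect; it is exactly the event expected about one time in twenty by chance alone.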

In the associated article, published in The American Statistician, Ronald Wasserstein and Nicole Lazar explain how the dependence on the p-value threatens the reproducibility and replicability of research. Importantly, the p-value does not prove that scientific conclusions are true and does not signify the importance of a result. As Wasserstein says in the ASA press release, "The p-value was never intended to be a substitute for scientific reasoning."

As documented in a 2015 PLOS Biology Perspective by Megan Head, Michael Jennions and colleagues, the p-value is subject to a common type of manipulation known as "p-hacking," where researchers selectively report datasets or analyses that achieve a "significant" result. The authors of this Perspective used a text-mining protocol to reveal this to be a widespread issue across multiple scientific disciplines. The authors also provide helpful recommendations for researchers and journals.
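As a rough illustration of the mechanism Head et al. describe (this hypothetical simulation is mine, not their text-mining protocol), reporting only the smallest p-value from several analyses of null data inflates the false-positive rate well above the nominal 5%:

# Hypothetical sketch of p-hacking: try several outcomes, report only the "best" one.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_studies = 5_000
n_outcomes = 5  # number of analyses "tried" per study, all with no true effect

hacked_hits = 0
for _ in range(n_studies):
    best_p = min(
        stats.ttest_ind(rng.normal(size=30), rng.normal(size=30)).pvalue
        for _ in range(n_outcomes)
    )
    if best_p < 0.05:
        hacked_hits += 1

print(f"False-positive rate when reporting the best of {n_outcomes} analyses: "
      f"{hacked_hits / n_studies:.2f}")  # about 1 - 0.95**5, i.e. ~0.23

With five independent tries, the chance of at least one spuriously "significant" result is roughly 23%, which is why selective reporting skews the published distribution of p-values.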

The problem with the p-value cuts both ways. Over-interpretation of the p-value can lead to both false positives and false negatives. Dependence on a specific p-value threshold can also introduce bias, as researchers may discontinue or shelve work that fails to meet this arbitrary standard.

The hope is that the ASA statement will raise awareness of the problems that inappropriate p-value use continues to cause in scientific practice. Its guidelines can help researchers determine best practices for using the p-value and identify when other statistical measures are more appropriate.





