Trio offers tips for politicians on how to interpret scientific claims

November 21, 2013 by Bob Yirka, report

William Sutherland and David Spiegelhalter, a zoologist and a mathematician, respectively, at the University of Cambridge in the U.K., together with Mark Burgman, an ecologist at the University of Melbourne, have compiled a list of tips for politicians and policymakers. The aim is to give those in charge of governmental decision-making a way to interpret scientific claims for themselves, rather than having to rely on others. Their list of tips has been published as a Comment piece in the journal Nature.

The writers contend that politicians lack the skills necessary to properly interpret claims made by researchers, and because of that are generally in no position to judge whether such claims are accurate. This, they say, is a serious problem, because the manner in which policymakers earmark funds, set up rules, and either support an effort or rally against it tends to impact research efforts, among other things.

At issue, the trio contend, is the imperfect nature of science and the humanness of those engaged in it, a combination that can often lead to confusion over how to interpret results. The twenty tips they have come up with include such gems as the reminder that bias can creep into even the best research efforts, and that correlation does not imply causation, a phrase heard over and over again in the scientific community. The tips are meant to serve both as a collection of warnings about what to watch out for and as a roadmap to better understanding what is being read when interpreting claims made by scientists, some of whom may have ulterior motives.
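The correlation-versus-causation point lends itself to a small simulation. The sketch below is purely illustrative (the scenario and all numbers are invented, not drawn from the article): a hidden confounder drives two otherwise unrelated quantities, producing a strong correlation between variables that have no causal link to each other.

```python
import random

random.seed(0)

# Hypothetical illustration of "correlation does not imply causation":
# summer heat (the confounder) drives both ice-cream sales and drowning
# incidents, so the two correlate strongly despite neither causing the other.
n = 1000
heat = [random.gauss(0, 1) for _ in range(n)]            # hidden confounder
ice_cream = [h + random.gauss(0, 0.5) for h in heat]     # driven by heat
drownings = [h + random.gauss(0, 0.5) for h in heat]     # also driven by heat

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, no external libraries."""
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = pearson(ice_cream, drownings)
print(f"correlation between ice-cream sales and drownings: {r:.2f}")
```

The correlation comes out high (around 0.8 under these noise levels) even though, by construction, the two measured variables never influence one another; controlling for the confounder would make it vanish.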

The authors clearly do not mean to offend; they readily acknowledge that politicians as a group are generally smart people who really do want to do what's best. The problem, they suggest, is that many are not willing to dig deep enough to find out what is really going on behind scientific claims, or prefer to ignore evidence altogether for other political reasons. In either case, knowledge is power, and their tips offer just that, in the form of guidelines that can help separate the reality of scientific research results from the rhetoric.


More information: Policy: Twenty tips for interpreting scientific claims, Nature 503, 335–337 (21 November 2013) … tific-claims-1.14183



1 / 5 (7) Nov 21, 2013
Well, what are the tips? Just trust us and do what we say? We have superior judgment?
3.7 / 5 (3) Nov 21, 2013
Given the seemingly infinite amount of willful ignorance shown by Conservatives here, these tips have no chance of improving the constantly wrong decisions made by the Republican Party of No and the satellite TeaPublican party of Know nothing.
1.5 / 5 (8) Nov 21, 2013
Wonderful skepticism!

Differences and chance cause variation.
No measurement is exact.
Bias is rife.
Bigger is usually better for sample size.
Correlation does not imply causation.
Regression to the mean can mislead.
Extrapolating beyond the data is risky.
Beware the base-rate fallacy.
Controls are important.
Randomization avoids bias.
Seek replication, not pseudoreplication.
Scientists are human.
Significance is significant.
Separate no effect from non-significance.
Effect size matters.
Study relevance limits generalizations.
Feelings influence risk perception.
Dependencies change the risks.
Data can be dredged or cherry picked.
Extreme measurements may mislead.
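
The "regression to the mean can mislead" tip can be sketched numerically. The following is a minimal, hypothetical illustration (all quantities are invented): performers selected for an extreme score on a noisy test drift back toward the average on retest, with no intervention at all.

```python
import random

random.seed(1)

# Each observed score = true skill + luck. Selecting on one noisy
# measurement guarantees the selected group was partly "lucky" (here,
# unlucky), so a second measurement moves back toward the mean.
n = 10000
skill = [random.gauss(0, 1) for _ in range(n)]
test1 = [s + random.gauss(0, 1) for s in skill]   # skill + first-round luck
test2 = [s + random.gauss(0, 1) for s in skill]   # same skill, fresh luck

# Select the bottom 10% on the first test.
cutoff = sorted(test1)[n // 10]
worst = [i for i in range(n) if test1[i] <= cutoff]

mean1 = sum(test1[i] for i in worst) / len(worst)
mean2 = sum(test2[i] for i in worst) / len(worst)
print(f"selected group, first test: {mean1:.2f}, retest: {mean2:.2f}")
```

The retest mean for the selected group sits roughly halfway back toward zero, which is exactly the effect that can make a remedial program, or any intervention targeted at extreme cases, look effective when it did nothing.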

3 / 5 (2) Nov 21, 2013
More complete incoherence from Nikkie(incoherence)Tard

Anyone want to start taking bets as to when he will start talking in tongues?
not rated yet Nov 21, 2013
Well, what are the tips?

Hint: Follow the bouncing baby link at the bottom of the article.
2 / 5 (4) Dec 02, 2013
Well, what are the tips? Just trust us and do what we say? We have superior judgment?

How about: "Don't read the comments for sound policy advice"?
