Exposure of faked dishonesty study makes me proud to be a behavioural scientist


The story has a lot to recommend it: psychologist Dan Ariely, author of a bestselling book on the behavioral science of dishonesty, retracts his study because the data were faked. No wonder it has been picked up by the world's media. Buzzfeed declared this "the latest blow to the buzzy field of behavioral economics." Psychologist Stuart Ritchie wrote about the case under the headline: "Never trust a scientist."

I worry about these interpretations. And not because I teach on a behavioral science master's program. I worry because headlines like this risk stoking anti-science sentiment at a time when faith in experts is low, when thoughtful people parrot the claim that we live in a "post-truth world," and when mistrust of science is causing deaths.

But most of all, I worry about these interpretations because I take the opposite conclusion from this story. In this case, the lesson is that the scientific process actually worked well.

Casting doubt on the science

An important and overlooked detail is that the scientific process revealed years ago that the results of the paper didn't hold. Using data provided by an insurance company, Ariely's study claimed that people are more honest in their reports if they sign a declaration of truthfulness at the beginning of a document rather than at the end of it. The method was adopted by the IRS, the US tax collection agency, and at least one big insurance company.

While nobody had expressed concerns about deliberate fraud, many research teams had reported failed attempts to replicate the initial studies. Replication is important. Because science is rooted in probability, observing the same result on two independent occasions makes it far less likely that the result is a fluke.
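To put a rough number on that intuition, here is a toy calculation in Python. The 5% figure is just the conventional significance threshold, not anything specific to the Ariely study:

```python
# Toy illustration: if a spurious "significant" result slips through
# with probability 0.05 (the conventional threshold), the chance of
# two independent studies producing the same fluke is far smaller.
p_fluke_once = 0.05
p_fluke_twice = p_fluke_once ** 2

print(f"Fluke in one study:        {p_fluke_once:.2%}")   # 5.00%
print(f"Same fluke in two studies: {p_fluke_twice:.2%}")  # 0.25%
```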

In 2020, Ariely and his co-authors published a paper in which they themselves attempted and failed to replicate the initial results. At that time it had not yet emerged that the initial data had been faked. The authors concluded that the initial results were a fluke and titled the follow-up paper: "Signing at the beginning versus at the end does not decrease dishonesty."

Another striking feature is that the failed replications were published in one of the top general science journals. It's a recent development that scientists would devote their time to replication studies—and that top journals would devote precious column inches to publishing them—and follows a series of statistical studies that cast doubt on the rigor of published science.

First came a provocative data simulation study suggesting that more than half of published scientific results are false. This finding follows from three features of research, which the rough calculation after this list combines:

  • Some results are flukes.
  • New results are being found all the time.
  • Unexpected and eye-catching results are more likely to be published.
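
Taken together, these features can be turned into a back-of-the-envelope calculation. The sketch below uses illustrative numbers of my own choosing, not figures from the simulation study itself, yet it already pushes the share of false published findings above half:

```python
# Back-of-the-envelope sketch of how most published findings can be
# false. All three numbers are illustrative assumptions, not figures
# taken from the simulation study described above.
prior_true = 0.05  # fraction of tested hypotheses that are actually true
power      = 0.80  # chance a study detects a real effect
alpha      = 0.05  # chance a null effect looks significant (a fluke)

true_positives  = prior_true * power        # real effects that get found
false_positives = (1 - prior_true) * alpha  # flukes that get found

# If journals mostly publish eye-catching significant results, the
# published record is drawn from these two pools combined:
share_false = false_positives / (true_positives + false_positives)
print(f"Share of published 'findings' that are false: {share_false:.0%}")
# -> 54%
```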
Then there was the Many Labs replication project. It found that more than half the results published in top psychology journals couldn't be replicated.

Exposing false results

Some insightful contributions come from behavioral science, which comprises several disciplines that look at human behavior and interaction, and works at the intersection of statistics, economics and psychology. One of those insights was that scientists can publish false results even without knowing it.

To get a sense of this, you first need to know that the scientific community deems that a result provides evidence if it clears a statistical bar. That bar is expressed as a p-value, with p standing for probability. Lower p-values indicate more reliable results. A result counts as reliable evidence, or in the language of science is statistically significant, if its p-value falls below a conventional threshold, for example p < 0.05.
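To make this concrete, here is a minimal sketch of the threshold in action, using invented data and SciPy's standard two-sample t-test (none of this comes from the studies discussed in the article):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control   = rng.normal(loc=0.0, scale=1.0, size=50)  # no-effect group
treatment = rng.normal(loc=0.4, scale=1.0, size=50)  # small real effect

t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"p = {p_value:.4f}")
print("statistically significant" if p_value < 0.05
      else "not statistically significant")
```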

Intentionally or otherwise, researchers inflate the chances of attaining statistically significant results by engaging in questionable research practices. In a survey published in 2012, a majority of psychologists reported that they test their theory by measuring more than one outcome and then report results only for the outcome that attains statistical significance. Presumably they admitted to this behavior because they didn't recognize that it inflates the chance of drawing an incorrect conclusion.
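A short simulation shows how much this practice inflates false positives. Everything below is pure noise with no real effect anywhere, yet testing three outcomes and keeping only the best one nearly triples the error rate (this is a sketch of the general mechanism, not the survey's methodology):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_studies, n_outcomes, n_subjects = 10_000, 3, 30

false_positives = 0
for _ in range(n_studies):
    # Both groups are drawn from the same distribution: the null is true.
    group_a = rng.normal(size=(n_outcomes, n_subjects))
    group_b = rng.normal(size=(n_outcomes, n_subjects))
    # Test every outcome, then "report" only the most favorable one.
    p_values = [stats.ttest_ind(a, b).pvalue for a, b in zip(group_a, group_b)]
    if min(p_values) < 0.05:
        false_positives += 1

# With a single outcome this rate would sit near 5%; with three
# outcomes it climbs to roughly 14% (about 1 - 0.95**3).
print(f"False-positive rate: {false_positives / n_studies:.1%}")
```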

Uri Simonsohn, Leif Nelson and Joe Simmons, a trio of behavioral scientists who are routinely described as "data detectives," devised a test to ascertain whether a conclusion is likely to have derived from questionable research practices. The test examines whether the evidence that supports a claim is suspiciously clustered just below the threshold of statistical significance.
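The intuition can be caricatured in a few lines of Python. This is a deliberately simplified stand-in for the real test, which is statistically far more sophisticated; the function name and the 0.5 cutoff are my own inventions for illustration:

```python
def looks_suspicious(p_values, threshold=0.05, band=0.01):
    """Flag result sets whose significant p-values cluster just
    below the significance threshold (e.g. in the 0.04-0.05 band)."""
    significant = [p for p in p_values if p < threshold]
    if not significant:
        return False
    near_threshold = [p for p in significant if p > threshold - band]
    # A genuine effect tends to produce many very small p-values, so a
    # majority sitting just under the cutoff is a warning sign.
    return len(near_threshold) / len(significant) > 0.5

print(looks_suspicious([0.001, 0.003, 0.020]))         # False
print(looks_suspicious([0.048, 0.041, 0.049, 0.012]))  # True
```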

It was this test that debunked the idea of "power posing": the widely publicized claim that you can perform better if you adopt an assertive physical posture, such as hands on hips.

Now the three data detectives have done it again. It was on their blog that the stark and sensational facts of Ariely's dishonesty study were exposed. Contrary to Buzzfeed's claim that this case constitutes a blow to behavioral economics, it in fact demonstrates how behavioral science has equipped us to root out phony results. Exposing that bad apple, with the fascinating techniques employed to do it, is actually a victory for behavioral scientists.

Provided by The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.
