Overhype and 'research laundering' are a self-inflicted wound for social science


Earlier this fall, Dartmouth College researchers released a study claiming to link violent video games to aggression in kids. The logic of a meta-analytic study like this one is that by combining many individual studies, scientists can look for common trends or effects identified in earlier work. But as a psychology researcher who has long focused on this area, I contend this meta-analysis did nothing of the sort. In fact, the magnitude of the effect it found is about the same as the correlation between eating potatoes and teen suicide. If anything, it suggests video games do not predict youth aggression.

This study, and others like it, are symptomatic of a big problem within social science: the overhyping of dodgy, unreliable research findings that have little real-world application. Often such findings shape public perceptions of the human condition and guide public policy – despite largely being rubbish. Here's how it happens.

The last few years have seen psychology, in particular, embroiled in what some call a reproducibility crisis. Many long-cherished findings in social science more broadly have proven difficult to replicate under rigorous conditions. When a study is run again, it doesn't turn up the same results as originally published. The pressure to publish positive findings and the tendency for researchers to inject their own biases into analyses intensify the issue. Much of this failure to replicate can be addressed with more transparent and rigorous methods in social science.

But the overhyping of weak results is different. It can't be fixed methodologically; a solution would need to come from a cultural change within the field. But incentives to be upfront about shortcomings are few, particularly for a field such as psychology, which worries over public perception.

One example is the Implicit Association Test (IAT). This technique is most famous for probing for unconscious racial biases. Given the attention it and the theories based upon it have received, something of a cottage industry has developed to train employees about their implicit biases and how to overcome them. Unfortunately, a number of studies suggest the IAT is unreliable and doesn't predict real-world behavior. Combating racial bias is laudable, but the considerable public investment in the IAT and the concept of implicit bias is likely less productive than advertised.

Part of the problem is something I call "death by press release." This phenomenon occurs when researchers or their university, or a journal-publishing organization such as the American Psychological Association, releases a press release that hypes a study's findings without detailing its limitations. Sensationalistic claims tend to get more news attention.

For instance, one now-notorious food lab at Cornell experienced multiple retractions after it came out that the researchers had tortured their data to get headline-friendly conclusions. Their research suggested that people ate more when served larger portions, that action-packed television shows increased food consumption, and that kids' vegetable consumption would go up if produce was rebranded with kid-friendly themes such as "X-ray vision carrots." Ultimately, lab leader Brian Wansink appears to have become an expert in marketing social science, even though most of the conclusions were flimsy.

Another concern is a process I call "science laundering" – the cleaning up of dirty, messy, inconclusive science for public consumption. In my own area of expertise, the Dartmouth meta-analysis on video games is a good example. Similar evidence to what had been fed into the meta-analysis had been available for years and actually formed the basis for why most scholars no longer link violent games to youth assaults.

Science magazine recently discussed how meta-analyses can be misused to try to prematurely end scientific debates. Meta-analyses can be helpful when they illuminate scientific practices that may cause spurious effects, in order to guide future research. But they can artificially smooth over important disagreements between studies.

Let's say we hypothesize that eating blueberries cures depression. We run 100 studies to test this hypothesis. Imagine about 25 percent of our experiments find small links between blueberries and reduced depression, whereas the other 75 percent show nothing. Most people would agree this is a pretty poor showing for the blueberry hypothesis. The bulk of our evidence didn't find any improvement in depression after eating the berries. But, due to a quirk of meta-analysis, combining all 100 of our studies together would show what scientists call a "statistically significant" effect – meaning something that was unlikely to happen just by chance – even though most of the individual studies on their own were not statistically significant.
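To see how this arithmetic can play out, here's a minimal simulation sketch in Python (my illustration, not part of the original study or any real meta-analysis). It assumes a tiny true effect of 0.15 standard deviations across 100 hypothetical blueberry studies with 150 participants per arm, so roughly a quarter of the studies reach significance on their own, and then pools them with a standard fixed-effect, inverse-variance meta-analysis. All the numbers are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_studies = 100    # hypothetical blueberry studies (assumption)
n_per_arm = 150    # participants per group in each study (assumption)
true_d = 0.15      # tiny true effect, in standard-deviation units (assumption)

effects, variances, n_significant = [], [], 0

for _ in range(n_studies):
    blueberry = rng.normal(true_d, 1.0, n_per_arm)  # depression scores, berry group
    control = rng.normal(0.0, 1.0, n_per_arm)       # depression scores, control group
    _, p = stats.ttest_ind(blueberry, control)
    n_significant += p < 0.05

    # Cohen's d for this study and its approximate sampling variance (equal-n formula)
    sd = np.sqrt((blueberry.var(ddof=1) + control.var(ddof=1)) / 2)
    d = (blueberry.mean() - control.mean()) / sd
    effects.append(d)
    variances.append(2 / n_per_arm + d**2 / (4 * n_per_arm))

# Fixed-effect meta-analysis: inverse-variance-weighted average of the study effects
w = 1 / np.asarray(variances)
d_pooled = np.sum(w * np.asarray(effects)) / np.sum(w)
se_pooled = np.sqrt(1 / np.sum(w))
p_pooled = 2 * stats.norm.sf(abs(d_pooled / se_pooled))

print(f"individually significant: {n_significant} of {n_studies} studies")
print(f"pooled effect: d = {d_pooled:.3f}, p = {p_pooled:.2g}")
```

Run as written, most of the individual studies come back non-significant, yet the pooled estimate is overwhelmingly significant, because pooling 100 studies shrinks the standard error roughly tenfold. The meta-analysis hasn't uncovered a meaningful effect; it has simply gained the power to certify a trivial one.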

Merging together even a few studies that show an effect with a larger group of studies that don't can end up with a meta-analysis result that looks statistically significant – even if the individual studies varied quite a bit. These types of results constitute what some psychologists have called the "crud factor" of psychological research – statistically significant findings that are noise, not real effects that reflect anything in the real world. Or, put bluntly, meta-analyses are a great tool for scholars to fool themselves with.

Professional guild organizations for fields such as psychology and pediatrics should shoulder much of the blame for the spread of research overhyping. Such organizations release numerous, often deeply flawed, policy statements trumpeting research findings in a field. The public often does not realize that such organizations function to market and promote a profession; they're not neutral, objective observers of scientific research – which is often published, for income, in their own journals.

Unfortunately, such science laundering can come back to haunt a field when overhyped claims turn out to be misleading. Dishonest overpromotion of social science can cause the public and the courts to grow more skeptical of it. Why should taxpayers fund research that is oversold rubbish? Why should media consumers trust what research says today if they were burned by what it said yesterday?

Individual scholars and the professional guilds that represent them can do much to fix these issues by reconsidering lax standards of evidence, the overselling of weak effects, and the current lack of upfront honesty about methodological limitations. In the meantime, the public would do well to keep applying a healthy dose of critical thinking to lofty claims coming from press releases in the social sciences. Ask whether the magnitude of the effect is meaningfully greater than that of potatoes on teen suicide. If the answer is no, it's time to move on.

Provided by The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.

