
Why do rational people believe lies online? Research explains how misinformation spreads

Credit: Unsplash/CC0 Public Domain

Misinformation is an unfortunate reality of social media. On any given day, visitors to Facebook, Twitter, Instagram and other websites can find made-up "facts" about anything from vaccines to the war in Ukraine to climate change.

While some people can easily tell the difference between truth and fiction, others can't.

How does a seemingly rational person come to believe in misinformation?

That's a question being answered by "PolyGraphs: Combatting Networks of Ignorance in the Misinformation Age."

Spanning three departments—philosophy, economics and computer science—at Northeastern University London, the project uses computer simulations to help us learn more about how knowledge flows within a social media community.

Now two years in, the researchers have launched an interactive website and made some impressive discoveries, including insight into how and why rational people can come to believe the wrong thing.

The project uses artificial data, as well as data from real social networks like Facebook and Twitter, to create simulated communities wherein each individual is told to choose between A and B. (B is the correct choice—but the community doesn't know it.) The agents in the community collect their own evidence, share it with others and change their beliefs. Then, the researchers look at whether the community collectively reaches the correct conclusion, and how long it takes.
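The PolyGraphs team hasn't published its code alongside this article, so the following Python sketch is only a rough illustration of the kind of agent-based model being described: the success rates (0.55 versus 0.45), the amount of evidence gathered per round, the Bayesian update rule, the 0.99 consensus threshold and all of the function names are illustrative assumptions, not the project's actual setup.

```python
import random

# Illustrative parameters only -- not the PolyGraphs project's actual values.
P_B_IF_BETTER, P_B_IF_WORSE = 0.55, 0.45   # option B's success rate in either world
EVIDENCE_PER_ROUND = 10                     # pieces of evidence an agent gathers per round
MAX_ROUNDS = 2000

def bayes_update(credence, successes, n):
    """Update an agent's belief that 'B is better' after seeing `successes` of `n` B-trials."""
    like_better = (P_B_IF_BETTER ** successes) * ((1 - P_B_IF_BETTER) ** (n - successes))
    like_worse = (P_B_IF_WORSE ** successes) * ((1 - P_B_IF_WORSE) ** (n - successes))
    numer = credence * like_better
    return numer / (numer + (1 - credence) * like_worse)

def simulate(neighbors, seed=0):
    """Run one community (given as adjacency lists) until it accepts B, abandons B, or times out.

    Returns (True, rounds) if everyone becomes confident B is better,
    (False, rounds) if nobody is willing to try B any more, and (None, rounds) on timeout.
    """
    rng = random.Random(seed)
    n = len(neighbors)
    credence = [rng.random() for _ in range(n)]          # initial belief that B is better
    for round_no in range(1, MAX_ROUNDS + 1):
        # Agents who currently favour B gather evidence about it; B really is better here.
        evidence = {i: sum(rng.random() < P_B_IF_BETTER for _ in range(EVIDENCE_PER_ROUND))
                    for i in range(n) if credence[i] > 0.5}
        if not evidence:                                  # the community has given up on B
            return False, round_no
        # Each agent updates on their own evidence plus whatever their neighbours share.
        new_credence = credence[:]
        for i in range(n):
            for j in (i, *neighbors[i]):
                if j in evidence:
                    new_credence[i] = bayes_update(new_credence[i], evidence[j],
                                                   EVIDENCE_PER_ROUND)
        credence = new_credence
        if all(c > 0.99 for c in credence):               # consensus: B is better
            return True, round_no
    return None, MAX_ROUNDS
```

The loop mirrors the process described above: agents who currently favor B gather evidence, share it with their neighbors, revise their beliefs, and the simulation records whether and how quickly the whole community settles on the right answer.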

Simulations similar to drug trials

If that sounds abstract, Amil Mohanan offers up a scenario not unlike what agents in the simulations face. Mohanan, an assistant professor in philosophy at Northeastern University London, likens the simulations to drug trials being performed by a community of doctors. In a simulation, each doctor is given drug A or drug B to test.

"We know that B is slightly better, but the doctors in the community that we've simulated don't," he says.

When the doctors do trials, they discover that drug B is better, and share their findings with their neighbors. Depending on a variety of factors, some of the doctors will change their beliefs based on what they learn from others. If all goes well, they will eventually reach a consensus that drug B is better.

But how long will that take?

"We're measuring, do the communities figure out that B is better, and how long does it take them?" Mohanan says. They conduct thousands of iterations of the simulations, changing different parameters like the size and shape of the network to figure out how long it takes for everyone to agree on drug B.

For example, in one type of network, information only flows between each agent and two others, creating a circle of sharing. In another, one person shares information with the rest of the group, which shares information back. In another, everyone shares information with everyone. These types of networks mimic the ones we see in real life, or online.
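Those three arrangements are straightforward to write down as adjacency lists and feed into the sketch above. Again, this is purely illustrative: the 10-agent, 200-run comparison and the function names are assumptions, not the project's actual experiments.

```python
def cycle(n):
    """Information flows only between each agent and two others, forming a circle."""
    return [[(i - 1) % n, (i + 1) % n] for i in range(n)]

def star(n):
    """One agent shares with the rest of the group, and the group shares back."""
    return [list(range(1, n))] + [[0] for _ in range(1, n)]

def complete(n):
    """Everyone shares information with everyone."""
    return [[j for j in range(n) if j != i] for i in range(n)]

# Illustrative comparison: how often, and how quickly, does a 10-agent community settle on B?
for name, build in [("cycle", cycle), ("star", star), ("complete", complete)]:
    runs = [simulate(build(10), seed=s) for s in range(200)]
    correct = [rounds for ok, rounds in runs if ok]
    print(f"{name:8s}: reached the true answer in {len(correct)}/200 runs,"
          f" average {sum(correct) / max(len(correct), 1):.0f} rounds when it did")
```

Running the same simulation over different network shapes is how questions about the speed and reliability of consensus get answered in practice.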

Small lies can have a big impact

Another factor that impacts the results is misinformation. Small lies can have a big impact, Mohanan has found, depending on factors like how sparse or dense the network is.

And lying can look like different things in different simulations. Doctors can pretend they've conducted a trial and claim to know that drug A is better. They can pick a drug at random and make up data to support it. Or they can lie and go with drug A because it's the one they know best.

"The overall outcomes are very different" for each scenario, Mohanan says. After thousands of iterations of the simulation, the team has discovered that these seeds of doubt can create a large impact.

Other findings have been unexpected as well. For one thing, the team has discovered that sometimes, when people share more information with each other, it can actually have a deleterious effect, delaying consensus. This is known as the Zollman effect: the finding that greater connectivity can make a community more likely to settle on a mistaken belief.

"Rational agents in a network like this can end up ignorant more often or more likely to fail to arrive at the true answer to the question, if they talk to one another more," says Brian Ball, head of faculty in philosophy at Northeastern University London.

They also found that when members of the community don't trust those whose beliefs are different from their own, this can lead to a lack of consensus, leaving the community polarized.

Ignorance among rational subjects

Above all, they are out to prove that ignorance can take hold even among rational subjects. People may perceive those who "get it wrong" as unintelligent or prejudiced. However, "We show that actually, people can get it wrong when it has nothing to do with that," Ball says.

Instead, people can be misled through no fault of their own, depending on the structure of the social network.

"What it might have something to do with is how well connected they are in communities, and more broadly what their informational environment looks like," Ball says.

Ball hopes that these discoveries can be useful in a variety of settings, including social media and non-profit organizations that are geared toward fighting misinformation online.

If you're worried about what you see on social media, Mohanan has some reassuring words. Generally, "the truth comes out over time."

Citation: Why do rational people believe lies online? Research explains how misinformation spreads (2023, July 3) retrieved 27 April 2024 from https://phys.org/news/2023-07-rational-people-online-misinformation.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.
