Twitter bots for good: Study reveals how information spreads on social media

September 22, 2017, University of Southern California

After an election year marked by heated exchanges and the distribution of fake news, Twitter bots earned a bad reputation—but not all bots are bad, suggests a new study co-authored by Emilio Ferrara, a USC Information Sciences Institute computer scientist and a research assistant professor at the USC Viterbi School of Engineering's Department of Computer Science.

In a large-scale experiment designed to analyze how information spreads on social networks, Ferrara and a team from the Technical University of Denmark deployed a network of algorithm-driven Twitter accounts, or social bots, programmed to spread positive messages on Twitter.

"We found that bots can be used to run interventions on social media that trigger or foster good behaviors," says Ferrara, whose previous research focused on the proliferation of bots in the election campaign.

The experiment also revealed another intriguing pattern: a piece of information is much more likely to go viral when people are exposed to it multiple times, through multiple sources.

"This milestone shatters a long-held belief that ideas spread like an infectious disease, or contagion, with each exposure resulting in the same probability of infection," says Ferrara.

"Now we have seen empirically that when you are exposed to a given piece of information multiple times, your chances of adopting this information increase every time."

To reach these conclusions, the researchers first developed a dozen positive hashtags, ranging from health tips to fun activities, such as encouraging users to get the flu shot, high-five a stranger and even Photoshop a celebrity's face onto a turkey at Thanksgiving.

Then, they designed a network of 39 bots to deploy these hashtags in a synchronized manner to 25,000 real followers during a four-month period from October to December 2016.

Each bot automatically recorded when a target user retweeted intervention-related content and also each exposure that had taken place prior to retweeting. Several hashtags received more than one hundred retweets and likes, says Ferrara.

"We also saw that every exposure increased the probability of adoption: there is a cumulative reinforcement effect," says Ferrara.

"It seems there are some cognitive mechanisms that reinforce your likelihood to believe in or adopt a piece of information when it is validated by multiple sources in your social network."

This mechanism could explain, for example, why you might take one friend's movie recommendation with a grain of salt. But the probability that you will also see that movie increases cumulatively as each additional friend makes the same recommendation.
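The difference between the "infectious disease" view and the cumulative-reinforcement finding can be sketched as a toy calculation. (This is an illustrative model only; the per-exposure probabilities and the linear "boost" per repeated exposure are assumptions for the sketch, not parameters estimated by the study.)

```python
def adoption_prob_simple(n_exposures, p=0.1):
    """Simple contagion: each exposure independently 'infects' with the
    same probability p, so adoption after n exposures is 1 - (1-p)^n."""
    return 1 - (1 - p) ** n_exposures

def adoption_prob_complex(n_exposures, p=0.05, boost=0.05):
    """Complex contagion (toy version): the k-th exposure succeeds with
    probability p + (k - 1) * boost, capped at 1, so each repeated
    exposure from another source is more persuasive than the last."""
    prob_not_adopted = 1.0
    for k in range(1, n_exposures + 1):
        p_k = min(p + (k - 1) * boost, 1.0)
        prob_not_adopted *= 1 - p_k
    return 1 - prob_not_adopted

# With the same baseline p, the reinforcing model overtakes the
# constant-probability model as exposures accumulate.
for n in (1, 2, 4, 8):
    print(n, adoption_prob_simple(n, 0.05), adoption_prob_complex(n, 0.05))
```

Setting `boost=0` recovers the simple-contagion curve, which is one way to see that the two models differ only in whether repeated exposures compound.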

Aside from revealing the hidden dynamics that drive human behavior online, this discovery could also improve how positive intervention strategies are deployed on social networks in many scenarios, including public health announcements for disease control or emergency management in the wake of a crisis.

"The common approach is to have one broadcasting entity with many followers, but this study implies that it would be more effective to have multiple, decentralized bots share synchronized content," says Ferrara.

He adds that many communities are isolated from certain accounts due to Twitter's echo chamber effect: social media users tend to be exposed to content from those whose views match their own.

"What if there is a health crisis and you don't follow the Centers for Disease Control and Prevention account? By taking a grassroots approach, we could break down the silos of the echo chamber for the greater good," says Ferrara.

The study, entitled "Evidence of complex contagion of information in social media: An experiment using Twitter bots," was published in PLOS ONE on Sept. 22.


More information: Bjarke Mønsted et al. Evidence of complex contagion of information in social media: An experiment using Twitter bots, PLOS ONE (2017). DOI: 10.1371/journal.pone.0184148


