Twitter bots for good: Study reveals how information spreads on social media

September 22, 2017, University of Southern California

After an election year marked by heated exchanges and the distribution of fake news, Twitter bots earned a bad reputation—but not all bots are bad, suggests a new study co-authored by Emilio Ferrara, a USC Information Sciences Institute computer scientist and a research assistant professor at the USC Viterbi School of Engineering's Department of Computer Science.

In a large-scale experiment designed to analyze the spread of information on social networks, Ferrara and a team from the Technical University of Denmark deployed a network of algorithm-driven Twitter accounts, or social bots, programmed to spread positive messages on Twitter.

"We found that bots can be used to run interventions on that trigger or foster good behaviors," says Ferrara, whose previous research focused on the proliferation of bots in the election campaign.

But the experiment also revealed another intriguing pattern: information is much more likely to go viral when people are exposed to the same piece of information multiple times through multiple sources.

"This milestone shatters a long-held belief that ideas spread like an infectious disease, or contagion, with each exposure resulting in the same probability of infection," says Ferrara.

"Now we have seen empirically that when you are exposed to a given piece of information multiple times, your chances of adopting this information increase every time."

To reach these conclusions, the researchers first developed a dozen positive hashtags, ranging from health tips to fun activities, such as encouraging users to get the flu shot, high-five a stranger, and even Photoshop a celebrity's face onto a turkey at Thanksgiving.

Then they designed a network of 39 bots to deploy these hashtags in a synchronized manner to 25,000 real followers over three months, from October to December 2016.

Each bot automatically recorded when a target user retweeted intervention-related content, along with every exposure the user had received before retweeting. Several hashtags received more than one hundred retweets and likes, says Ferrara.
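The bookkeeping behind that sentence is conceptually simple. Below is a hedged sketch of what it might look like; the event stream, user names, and helper functions are all hypothetical, since the paper's instrumentation is not described at this level of detail:

```python
from collections import defaultdict

exposure_log = defaultdict(list)   # user -> timestamps of exposures
exposures_before_retweet = {}      # user -> exposure count at retweet time

def record_exposure(user, timestamp):
    """Log that a bot's tweet appeared in a target user's timeline."""
    exposure_log[user].append(timestamp)

def record_retweet(user, timestamp):
    """When a user retweets intervention content, count how many
    exposures they had received up to that moment."""
    exposures_before_retweet[user] = sum(
        1 for t in exposure_log[user] if t <= timestamp)

# Hypothetical event stream: (kind, user, unix_timestamp)
events = [
    ("exposure", "alice", 100),
    ("exposure", "alice", 180),
    ("retweet",  "alice", 200),
    ("exposure", "bob",   150),
]
for kind, user, ts in sorted(events, key=lambda e: e[2]):
    (record_exposure if kind == "exposure" else record_retweet)(user, ts)

print(exposures_before_retweet)   # {'alice': 2}
```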

"We also saw that every exposure increased the probability of adoption - there is a cumulative reinforcement effect," says Ferrara.

"It seems there are some cognitive mechanisms that reinforce your likelihood to believe in or adopt a piece of information when it is validated by multiple sources in your social network."

This mechanism could explain, for example, why you might take one friend's movie recommendation with a grain of salt. But the probability that you will also see that movie increases cumulatively as each additional friend makes the same recommendation.
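As a worked toy example, suppose the first recommendation gives you a 10 percent chance of seeing the movie, and each subsequent recommendation is weighted more heavily; the numbers are invented purely for illustration:

```python
p = 0.10      # chance of seeing the movie after the first recommendation
boost = 1.5   # assumed reinforcement: each new recommendation counts more

not_seen = 1.0
for friend in range(1, 5):
    p_k = min(1.0, p * boost ** (friend - 1))  # rising per-exposure chance
    not_seen *= 1 - p_k
    print(f"after {friend} recommendation(s): "
          f"P(see movie) = {1 - not_seen:.2f}")
```

Under these assumptions the cumulative probability climbs from 0.10 after one friend to roughly 0.61 after four, much faster than a fixed per-recommendation chance would allow.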

Aside from revealing the hidden dynamics that drive human behavior online, this discovery could also improve how positive intervention strategies are deployed on social networks in many scenarios, including public health announcements for disease control or emergency management in the wake of a crisis.

"The common approach is to have one broadcasting entity with many followers, but this study implies that it would be more effective to have multiple, decentralized bots share synchronized content," says Ferrara.

He adds that many communities are isolated from certain accounts due to Twitter's echo chamber effect: social media users tend to be exposed to content from those whose views match their own.

"What if there is a health crisis and you don't follow the Centers for Disease Control and Prevention account? By taking a grassroots approach, we could break down the silos of the echo chamber for the greater good," says Ferrara.

The study, entitled "Evidence of complex contagion of information in social media: An experiment using Twitter bots," was published in PLOS ONE on Sept. 22.

More information: Bjarke Mønsted et al. Evidence of complex contagion of information in social media: An experiment using Twitter bots, PLOS ONE (2017). DOI: 10.1371/journal.pone.0184148


Comments

MR166, Sep 23, 2017
"We found that bots can be used to run interventions on social media that trigger or foster good behaviors,"

Well I suppose one would have to define good behavior vs bad behavior. Which government agency should be in charge of that determination?
MR166, Sep 23, 2017
This is just another case of trying to make the end justify the means. Creating a false impression of public opinion via a program can never be justified.
