Fake Tweets, real consequences for the election

Software robots masquerading as humans are influencing the political discourse on social media as never before and could threaten the very integrity of the 2016 U.S. presidential election, said Emilio Ferrara, a computer scientist and research leader at the USC Information Sciences Institute (ISI) and a research assistant professor at the USC Viterbi School of Engineering.

By leveraging state-of-the-art bot detection algorithms, Ferrara and co-author Alessandro Bessi, a visiting research assistant at USC's ISI, have made a startling discovery: a surprisingly high percentage of the political discussion taking place on Twitter was generated by pro-Donald Trump and pro-Hillary Clinton software robots, or social bots, with the express purpose of distorting the online discussion about the election.

The researchers analyzed 20 million election-related tweets created between Sept. 16 and Oct. 21. They found that robots, rather than people, produced 3.8 million tweets, or 19 percent. Social bots also accounted for 400,000 of the 2.8 million individual users, or nearly 15 percent of the population under study.
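To make the arithmetic behind those percentages concrete, here is a minimal sketch in Python. It assumes a hypothetical set of per-account bot scores from some detection classifier and a flat list of tweets; the 0.5 threshold and the data structures are illustrative choices, not the study's actual pipeline.

```python
# Minimal sketch: estimating the share of bot accounts and bot-authored tweets.
# Hypothetical inputs: a dict mapping user IDs to bot-likelihood scores and a
# list of (user_id, tweet_id) pairs. The threshold is illustrative only.

def summarize_bot_activity(bot_scores, tweets, threshold=0.5):
    # Accounts scoring at or above the threshold are treated as bots.
    bot_users = {uid for uid, score in bot_scores.items() if score >= threshold}

    # Count how many tweets in the sample were authored by those accounts.
    bot_tweets = sum(1 for uid, _ in tweets if uid in bot_users)

    return {
        "bot_user_share": len(bot_users) / len(bot_scores),
        "bot_tweet_share": bot_tweets / len(tweets),
    }

# Toy example: 2 of 5 accounts flagged, producing 3 of 10 tweets.
scores = {"a": 0.9, "b": 0.1, "c": 0.7, "d": 0.2, "e": 0.3}
sample_tweets = [("a", 0), ("a", 1), ("c", 2)] + [("b", i) for i in range(3, 10)]
print(summarize_bot_activity(scores, sample_tweets))
# {'bot_user_share': 0.4, 'bot_tweet_share': 0.3}
```

Applied to the study's figures, the same ratios give 3.8 million of 20 million tweets (19 percent) and 400,000 of 2.8 million accounts (roughly 14 percent).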

"The presence of these bots can affect the dynamics of the political discussion in three tangible ways," writes in a recently released paper titled, "Social Bots Distort the 2016 U.S. Presidential Election Online Discussion," appearing the journal First Monday.

"First, influence can be redistributed across suspicious accounts that may be operated with malicious purposes. Second, the political conversation can become further polarized. Third, spreading of misinformation and unverified information can be enhanced."

"As a result, the integrity of the 2016 U.S. presidential election could be possibly endangered."

Interestingly, the robot-produced tweets about Trump were almost uniformly positive, boosting the candidate's popularity. By contrast, only half of the bot tweets about Clinton were positive, with the other half criticizing the nominee, according to the research paper. South Carolina produced the most fake campaign-related tweets, the study reports.

Because of social bots' sophistication, it's often impossible to determine who creates them, although political parties; local, national and foreign governments; and "even single individuals with adequate resources could obtain the operational capabilities and technical tools to deploy armies of social bots and affect the directions of online political conversation," the report says.

The "master puppeteers" behind influence bots, Ferrara added, often create fake Twitter and Facebook profiles. They do so by stealing online pictures, giving them fictitious names, and cloning biographical information from existing accounts. These bots have become so sophisticated that they can tweet, retweet, share content, comment on posts, "like" candidates, grow their social influence by following legitimate human accounts and even engage in human-like conversations.

