Neutral "drifter" bots, in yellow, and a sample of their friends and followers colored according to political alignment. Large nodes are accounts sharing a lot of low-credibility links. Credit: Indiana University

In this era of political polarization, many accuse online social media platforms such as Twitter of liberal bias, intentionally favoring and amplifying liberal content and users while suppressing other political content.

But a new Indiana University study finds this is not the case. Political biases, the researchers found, stem from the social interactions of our accounts: we receive content closely aligned with whatever our online friends produce, especially our very first online friends. Moreover, the political biases they observed on Twitter favor conservative content.

"Our main finding is that the information Twitter users see in their depends on the political leaning of their earliest connections," said study co-author Filippo Menczer. "We found no evidence of intentional interference by the platform. Instead, can be explained by the use, and abuse, of the platform by its users."

The study, "Neutral bots probe political bias on social media," is published online in the journal Nature Communications. The authors are a team of researchers from the Observatory on Social Media (OSoMe, pronounced awesome) at IU Bloomington, led by Menczer, who is director of OSoMe and a Luddy Distinguished Professor of informatics and computer science at the Luddy School of Informatics, Computing, and Engineering.

To uncover biases in the online news and information to which people are exposed on Twitter, the researchers deployed 15 bots, called "drifters" to distinguish their neutral behavior from that of other types of social bots on Twitter. The drifters mimicked human users but were controlled by algorithms that activated them randomly to perform actions.

After initializing each bot with one first friend from a popular news source aligned with the left, center-left, center, center-right, or right of the U.S. political spectrum, the researchers let the drifters loose "in the wild" on Twitter.
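For readers curious what such a bot looks like in practice, the Python sketch below captures the idea. It is not the authors' code: the client, the seed handles, and the action set are hypothetical stand-ins, and an offline stub replaces the real Twitter API so the example runs on its own.

```python
import random

# Everything here is an illustrative stand-in; the study's actual bot code
# and seed account handles are not given in this article.
SEED_FRIENDS = {
    "left": "left_news_source",
    "center-left": "center_left_news_source",
    "center": "center_news_source",
    "center-right": "center_right_news_source",
    "right": "right_news_source",
}

class StubClient:
    """Offline stand-in for a real Twitter API wrapper, so the sketch runs."""
    def __init__(self):
        self.friends = []
        self.posts = []

    def follow(self, handle):
        self.friends.append(handle)

    def timeline(self):
        # Pretend each friend contributes one tweet to the home timeline.
        return [f"tweet from {f}" for f in self.friends]

    def candidates(self):
        # Accounts reachable from current friends (friends-of-friends).
        return [f"{f}/neighbor" for f in self.friends]

    def retweet(self, tweet):
        self.posts.append(f"RT: {tweet}")

class Drifter:
    """A neutral bot: it wakes at random times and takes one unbiased action."""
    def __init__(self, client, alignment):
        self.client = client
        # The one deliberate choice: a single initial friend drawn from a
        # news source at the given point on the political spectrum.
        self.client.follow(SEED_FRIENDS[alignment])

    def wake(self):
        # No political preference of its own: pick an action and a target
        # uniformly at random from what the platform exposes to it.
        if random.random() < 0.5:
            self.client.retweet(random.choice(self.client.timeline()))
        else:
            self.client.follow(random.choice(self.client.candidates()))

bot = Drifter(StubClient(), "center")
for _ in range(20):  # the real drifters ran daily for five months
    bot.wake()
print(f"{len(bot.client.friends)} friends, {len(bot.client.posts)} retweets")
```

The key design point mirrored here is that after the seed friend is chosen, every subsequent decision is random, so any drift in what the bot sees or shares comes from the platform and its users, not from the bot.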

The research team collected data on the drifters daily. After five months, they examined the content consumed and generated by the drifters, analyzing the political alignment of the bots' friends and followers and their exposure to information from low-credibility news and information sources.
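The kind of bookkeeping this involves can be illustrated in a few lines of Python. The scores, domains, and accounts below are invented for illustration; the paper uses its own alignment measures and an established list of low-credibility sources.

```python
from statistics import mean

# Hypothetical measurements for one drifter: alignment scores for its
# friends on a left-to-right scale of -1 to +1, and the domains of links it
# saw, checked against an assumed list of low-credibility sources.
friend_alignment = {"acct_a": -0.8, "acct_b": 0.3, "acct_c": 0.9, "acct_d": 0.6}
low_credibility_domains = {"hoaxnews.example", "clickbait.example"}
links_seen = ["reuters.com", "hoaxnews.example", "apnews.com", "clickbait.example"]

# Two of the quantities tracked per drifter, computed the simplest way:
mean_alignment = mean(friend_alignment.values())
low_cred_share = sum(
    domain in low_credibility_domains for domain in links_seen
) / len(links_seen)

print(f"mean friend alignment: {mean_alignment:+.2f}")  # > 0 leans right
print(f"low-credibility exposure: {low_cred_share:.0%}")
```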

The research revealed that the political alignment of an initial friend on social media has a major impact on the structure of a user's social network and their exposure to low-credibility sources.

"Early choices about which sources to follow impact the experiences of social media users," Menczer said.

The study found that drifters tended to be drawn to the political right. Drifters with right-wing initial friends were gradually embedded into homogeneous networks where they were exposed to more right-leaning and low-credibility content. They even started to spread right-leaning content themselves. They also tended to follow more automated accounts.

Because the drifters were designed to be neutral, the partisan nature of the content they consumed and produced reflects biases in the "online information ecosystem" created by user interactions, according to Menczer.

"Online influence is affected by the echo-chamber characteristics of the social network," he says. "Drifters following more partisan news sources received more politically aligned followers, becoming embedded in denser echo chambers."

To avoid getting stuck in online echo chambers, users must make extra efforts to moderate the content they consume and the social ties they form, according to Diogo Pacheco, a former postdoctoral fellow at the Center for Complex Networks and Systems Research at IU Bloomington and co-author of the study.

"We hope this study increases awareness among users about the implicit biases of their online connections and their vulnerabilities to being exposed to selective information, or worse, such as influence campaigns, manipulation, misinformation, and polarization," said Pacheco. "How to design mechanisms capable of mitigating biases in online ecosystems is a key question that remains open for debate."

More information: Wen Chen et al, Neutral bots probe political bias on social media, Nature Communications (2021). DOI: 10.1038/s41467-021-25738-6

Provided by Indiana University