
Algorithms are used to personalize our newsfeed on social media. But the risk is that the points of view we are presented with become increasingly limited and extreme. EPFL researchers have developed a solution that would make users' personalized content more balanced, and their project has already generated interest among human rights campaigners.

When you click on one link rather than another, your choice influences the content you will be shown by various websites further down the line. The algorithms used by platforms like Facebook learn what our preferences are and provide more and more content that matches our interests. The risk is that we will never be shown anything that goes against our opinions, and this can distort our view of the world. "By ever more carefully selecting what we see, these algorithms are distorting reality. Social media platforms effectively become echo chambers in which opinions can become increasingly extreme," explains Elisa Celis, senior researcher in the School of Computer and Communication Sciences (IC) at EPFL.

And this can have an impact on the reader. "Numerous studies have shown that if you are undecided about something, your decision will ultimately be influenced by the frequency and order in which you are presented with information. So these algorithms can actually shape your opinion based on biased data," says Celis. In response to this problem, Celis worked with Nisheeth Vishnoi, professor in the School of Computer and Communication Sciences (IC) at EPFL, to develop a system to prevent users from being fed totally one-sided content.

An algorithm that's just as effective

They designed an algorithm that can be tuned to ensure that users are shown a minimum amount of diverse content. "A platform could, for instance, opt to have views that oppose those of the user make up at least 10 percent of the newsfeed to ensure the user's view of the world remains more balanced," explain the researchers. The algorithm could be easily integrated into current systems. The main challenge is getting the large corporations on board. "For platforms like Facebook, these algorithms have to be effective in order to generate advertising revenue. We wanted to show that it is possible to create an algorithm that is just as effective but that allows content to be customized in a fairer and more balanced manner," explains Vishnoi.
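To make the idea concrete, here is a minimal sketch of how such a constraint might be enforced at feed-selection time. It is not the EPFL researchers' actual method; the names (Item, build_feed, min_opposing_share) and the greedy quota-filling strategy are illustrative assumptions, standing in for whatever ranking model and constraint mechanism a real platform would use.

```python
# A hypothetical sketch: select a feed of k items by relevance while
# guaranteeing a minimum share of items from opposing viewpoints.
from dataclasses import dataclass
from math import ceil

@dataclass
class Item:
    id: str
    relevance: float   # engagement score from some ranking model (assumed)
    opposing: bool     # True if the item challenges the user's views (assumed label)

def build_feed(candidates: list[Item], k: int,
               min_opposing_share: float = 0.10) -> list[Item]:
    """Pick k items, reserving at least a fixed share for opposing views."""
    quota = ceil(min_opposing_share * k)
    # First, satisfy the quota with the most relevant opposing items...
    opposing = sorted((i for i in candidates if i.opposing),
                      key=lambda i: i.relevance, reverse=True)
    feed = opposing[:quota]
    chosen = {i.id for i in feed}
    # ...then fill the remaining slots purely by relevance.
    rest = sorted((i for i in candidates if i.id not in chosen),
                  key=lambda i: i.relevance, reverse=True)
    feed.extend(rest[:k - len(feed)])
    return sorted(feed, key=lambda i: i.relevance, reverse=True)
```

Because the non-reserved slots are still filled purely by relevance, a constraint like this sacrifices little engagement in the typical case, which is the trade-off the researchers argue platforms can afford to make.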

Raising governments' awareness of this issue will be a key factor when it comes to filling the legislative gap in this area. Several organizations have already shown interest in the researchers' project, which they recently presented to delegates of human rights agencies in Geneva, including members of the United Nations Office of the High Commissioner for Human Rights. "These algorithms are currently totally unregulated because the impact of the bias they generate is not yet properly understood. As a citizen, I feel powerless because I have no control over the content I see. The present state of affairs could turn out to be quite dangerous for democracy. We really need to look for alternative solutions," adds Vishnoi.