Social networks have become a dominant force in society. Family, friends, peers, community leaders and media communicators are all part of people's social networks. Individuals within a network may have different opinions on important issues, but it's their collective actions that determine the path society takes.
To understand how we operate as a group, and to explain why we do what we do, researchers have developed a novel computational model, along with the conditions under which consensus is reached in a wide range of situations. The findings are published in the August 2014 special issue on Signal Processing for Social Networks of the IEEE Journal of Selected Topics in Signal Processing.
"We wanted to provide a new method for studying the exchange of opinions and evidence in networks," said Kamal Premaratne, professor of electrical and computer engineering at the University of Miami (UM) and principal investigator of the study. "The new model helps us understand the collective behavior of adaptive agents—people, sensors, databases or abstract entities—by analyzing communication patterns that are characteristic of social networks."
The model addresses some fundamental questions: how opinions can best be modeled, how those opinions are updated, and when consensus is reached.
One key feature of the new model is its capacity to handle the uncertainties associated with soft data (such as opinions of people) in combination with hard data (facts and numbers).
"Human-generated opinions are more nuanced than physical data and require rich models to capture them," said Manohar N. Murthi, associate professor of electrical and computer engineering at UM and co-author of the study. "Our study takes into account the difficulties associated with the unstructured nature of the network," he added. "By using a new 'belief updating mechanism,' our work establishes the conditions under which agents can reach a consensus, even in the presence of these difficulties."
The agents exchange and revise their beliefs through their interactions with other agents. The interaction is usually local, in the sense that only neighboring agents in the network exchange information in order to update their beliefs or opinions. The goal is for the group of agents in a network to arrive at a consensus that is close to the ground truth, that is, what has been confirmed by the gathering of objective data.
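The local exchange described above can be illustrated with a minimal sketch. Note this uses a simple averaging update on a made-up four-agent network, not the paper's actual belief-revision mechanism, which handles richer soft/hard evidence; all names and values here are hypothetical.

```python
# Illustrative sketch: local opinion exchange on a small network.
# A simple averaging update (DeGroot-style), NOT the study's actual
# belief-updating mechanism. Network and opinion values are invented.

neighbors = {          # hypothetical undirected network
    0: [1, 2],
    1: [0, 2],
    2: [0, 1, 3],
    3: [2],
}
opinions = {0: 0.9, 1: 0.1, 2: 0.5, 3: 0.3}  # scalar opinions in [0, 1]

for _ in range(200):   # repeated local updates
    new = {}
    for agent, nbrs in neighbors.items():
        # each agent averages its own opinion with its neighbors' opinions
        vals = [opinions[agent]] + [opinions[n] for n in nbrs]
        new[agent] = sum(vals) / len(vals)
    opinions = new

# on a connected network, the opinions converge to a common value
spread = max(opinions.values()) - min(opinions.values())
print(spread)
```

Because the toy network is connected, repeated local averaging drives all four opinions toward a single shared value, which is the consensus state the article refers to.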
In previous work, the consensus achieved by the agents depended entirely on how the agents update their beliefs: different updating schemes could yield different consensus states. The consensus in the current model is more rational and meaningful.
"In our work, the consensus is consistent with a reliable estimate of the ground truth, if it is available," Premaratne said. "This consistency is very important, because it allows us to estimate how credible each agent is."
According to the model, if the consensus opinion is closer to an agent's opinion, then one can say that this agent is more credible. On the other hand, if the consensus opinion is very different from an agent's opinion, then it can be inferred that this agent is less credible.
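This credibility rule can be sketched in a few lines. The distance measure and the numbers below are hypothetical, chosen only to illustrate the idea that credibility falls as an agent's opinion moves away from the consensus; the study's actual credibility estimate may be defined differently.

```python
# Illustrative sketch: rating agents by closeness to the consensus opinion.
# Values and the specific credibility formula are hypothetical.

consensus = 0.6                                # consensus reached by the group
agent_opinions = {"A": 0.55, "B": 0.95, "C": 0.62}

# credibility decreases with distance from the consensus opinion
credibility = {name: 1.0 - abs(op - consensus)
               for name, op in agent_opinions.items()}

most_credible = max(credibility, key=credibility.get)
print(most_credible)  # → C, the agent closest to the consensus
```

Here agent B, whose opinion (0.95) lies farthest from the consensus (0.6), receives the lowest credibility, matching the inference described in the article.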
"The fact that the same strategy can be used even in the absence of a ground truth is of immense importance because, in practice, we often have to determine if an agent is credible or not when we don't have knowledge of the ground truth," Murthi said.
In the future, the researchers would like to expand their model to include the formation of opinion clusters, where each cluster of agents shares similar opinions. Clustering can be seen in the emergence of extremism, the spread of minority opinions, the appearance of political affiliations, or affinity for a particular product, for example.
The title of the study is "Convergence Analysis of Iterated Belief Revision in Complex Fusion Environments."