
Reducing distrust in social media is not straightforward, computer scientists warn

Trust and distrust in social media coexisted in the study participants. Credit: Emmaline Nelsen

Are anti-misinformation interventions on social media working as intended? It depends, according to a new study led by William & Mary researchers and published in the Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI '24).

Their study surveyed over 1,700 participants in the United States, revealing that anti-misinformation features increased users' awareness of misinformation on social media but did not make them more likely to share information on social media, or more willing to receive information from the platforms. Trust and distrust coexisted in the participants, emerging as distinct constructs rather than as opposite ends of a single spectrum.

"Trust and distrust dynamics are the backbone of society," said Yixuan (Janice) Zhang, an assistant professor in the William & Mary Department of Computer Science. The study defined and measured these concepts, providing a validated survey for future use.

Zhang served as lead author alongside Yimeng (Yvonne) Wang, a W&M Ph.D. student; the author group also included researchers from universities in three countries, all contributing to the multidisciplinary field of human-computer interaction.

"HCI has a lot to do with equitable computing," said Zhang. Her HCI expertise aligns with William & Mary's position in the evolution of the liberal arts and sciences, aptly expressed by the university's proposed school of computing, data science and physics.

The study focused on Facebook, X (formerly Twitter), YouTube and TikTok as commonly used sources of news and information, expressly targeting the period from January 2017 to January 2023 as coinciding with the rise of major misinformation campaigns.

During the period examined, these platforms had all enacted anti-misinformation strategies such as labeling posts, curating credible content and linking to additional sources. Examples of these interventions were shown to study participants who had recently engaged with the platforms.

Respondents were then asked to express their level of agreement with eight statements, which measured four facets of trust and four facets of distrust.

For example, statements using the trust dimension of "competence" probed users' confidence in the platforms' ability to combat misinformation, while statements using the distrust dimension of "malevolence" assessed users' belief that the platforms themselves spread misinformation. The other facets of trust were benevolence, reliability and reliance; the other facets of distrust were skepticism, dishonesty and fear.
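To illustrate how such an instrument can be scored (this is a hypothetical sketch, not the authors' actual survey or analysis), the eight facet ratings might be averaged into separate trust and distrust scores. The facet names follow the article; the respondent data and 1–5 Likert scale are assumptions for illustration:

```python
# Hypothetical scoring sketch: four trust facets and four distrust
# facets, each rated on an assumed 1-5 Likert scale. Facet names come
# from the article; the data and scoring rule are illustrative only.

TRUST_FACETS = ["competence", "benevolence", "reliability", "reliance"]
DISTRUST_FACETS = ["malevolence", "skepticism", "dishonesty", "fear"]

def score(responses):
    """Average the facet ratings into a (trust, distrust) pair."""
    trust = sum(responses[f] for f in TRUST_FACETS) / len(TRUST_FACETS)
    distrust = sum(responses[f] for f in DISTRUST_FACETS) / len(DISTRUST_FACETS)
    return trust, distrust

# One made-up respondent: rates the platform as competent and reliable,
# yet also suspects it of deliberately spreading misinformation.
respondent = {
    "competence": 4, "benevolence": 3, "reliability": 4, "reliance": 3,
    "malevolence": 4, "skepticism": 5, "dishonesty": 3, "fear": 4,
}
print(score(respondent))  # -> (3.5, 4.0): both scores can be high at once
```

Because the two scores are computed independently rather than as one bipolar scale, a single respondent can score high on both, which is exactly the pattern the study set out to capture.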

Additionally, the study investigated how specific anti-misinformation interventions related to users' trust and distrust in the platforms, and how their experience with those features influenced their attitudes and behaviors.

An analysis of the results highlighted a cluster of respondents with both high trust and high distrust, potentially indicating that users were discerning about which specific aspects of the platforms they endorsed. This phenomenon also suggested a discrepancy between participants' perception of a given platform and their actual interaction experiences: users may, for example, trust other users to share reliable information while remaining skeptical of the platform's ability to address misinformation.
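The coexistence pattern can be pictured with a toy grouping of respondents by their two scores. The data, the 3.0 cutoff and the threshold rule below are hypothetical; the study itself used a formal cluster analysis:

```python
# Toy illustration of the high-trust / high-distrust cluster. Each pair
# is a made-up (trust, distrust) score on an assumed 1-5 scale, and the
# 3.0 cutoff is an arbitrary choice for demonstration.

respondents = [(4.2, 4.1), (4.5, 1.8), (1.9, 4.4), (3.8, 3.6), (2.0, 2.1)]

def label(trust, distrust, cutoff=3.0):
    """Assign a quadrant label from the two independent scores."""
    hi_t = trust >= cutoff
    hi_d = distrust >= cutoff
    if hi_t and hi_d:
        return "high trust / high distrust"  # the coexistence cluster
    if hi_t:
        return "high trust only"
    if hi_d:
        return "high distrust only"
    return "low on both"

labels = [label(t, d) for t, d in respondents]
print(labels.count("high trust / high distrust"))  # -> 2
```

Treating trust and distrust as two axes yields four quadrants instead of one line, so the "high/high" group is visible rather than averaged away.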

The researchers also observed that trust and distrust perceptions varied across platforms and were influenced by demographic factors. These findings, they argued, may help policymakers and regulators tailor interventions to users' specific cultures and contexts.

As an HCI researcher, Zhang believes in human-centered computing and in collaboration across diverse disciplines. In addition to designing and implementing computational technologies, she studied educational and social science theories during her Ph.D. program.

Wang's interests, too, lie in the interaction between humans and computers. She is now investigating the use of technology in addressing mental health concerns and building trustworthy platforms for users to enhance their mental well-being.

"As we focus on human beings, we really want to know if our work can help them," she said.

More information: Yixuan Zhang et al, Profiling the Dynamics of Trust & Distrust in Social Media: A Survey Study, Proceedings of the CHI Conference on Human Factors in Computing Systems (2024). DOI: 10.1145/3613904.3642927

Provided by William & Mary

Citation: Reducing distrust in social media is not straightforward, computer scientists warn (2024, May 14) retrieved 26 May 2024 from https://phys.org/news/2024-05-distrust-social-media-straightforward-scientists.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.
