Q&A: How misinformation and disinformation spread, the role of AI, and how we can guard against them


We are swimming in a sea of information, built on a 24/7 cycle of content produced for our endless consumption. The average American stares at a smartphone for three and a half hours a day and soaks it all up.

But how much of it is true? That's hard to say, but one thing is clear: there's a lot of misinformation and disinformation out there, and AI makes it easier than ever to create content out of thin air.

What are these different types of non-truthful information, and is their spread growing? And how do we combat them?

Kelly M. Greenhill, associate professor of political science, is an expert in the field. The author of "Weapons of Mass Migration: Forced Displacement, Coercion, and Foreign Policy," she is currently finishing a book exploring the influence of rumors, conspiracy theories, propaganda, myths, and other forms of extra-factual information on international politics.

Greenhill, who is also on the faculty of Tisch College of Civic Life, recently spoke with Tufts Now, explaining the varieties of what she calls extra-factual information, their influence on American politics, and what we as consumers of information can do to keep things straight.

What is the difference between misinformation and disinformation?

Misinformation is false or misleading information that is created or spread erroneously, while disinformation is false or misleading information that is knowingly and intentionally spread to cause harm.

In my own work, I often focus on what I refer to as extra-factual information, which includes both misinformation and disinformation, along with other forms of unverified and unverifiable information.

I believe it is important to analyze these disparate forms of information collectively for a few reasons.

One, they are all-pervasive in today's information ecosystem. Two, they are often interconnected; for instance, a misinformation-based rumor can give rise to a disinformation-driven conspiracy theory, based on unverifiable myths about certain individuals or groups in a society. Three, our brains don't process these different kinds of information differently. And four, the more we hear information, the more it feels "true" to our brains.

So if we only examine one or another kind of information in isolation, we miss a good deal about what is actually going on, both on the micro-level inside individuals' heads and on the macro-level in terms of observable outcomes across societies and even transnationally.

Has there been an appreciable increase in disinformation in American politics—and just in general? If so, where is it coming from?

There is more disinformation in politics around the globe, in no small part because we are living in an era when more politicians are unabashed about lying and/or disseminating misleading extra-factual information.

The norms against promulgating dissembling and disingenuous information have unfortunately loosened in recent years, and many politicians who engage in this behavior go unpunished.

Indeed, they are rewarded by supporters who prefer the "truthy" messaging (knowledge that "feels" true and comes from the gut rather than from higher reasoning) over less palatable fact-based alternatives. Donald Trump's extraordinary behavior in this regard during the 2016 presidential campaign and afterwards served as a model emulated by many others.

Has social media made the spread of misinformation and disinformation more pervasive?

Like other revolutionary communications technologies before it, the internet, and the social media platforms to which it gave rise, certainly make it easier to spread all kinds of information faster and more widely than was previously the case. And the algorithms on those platforms are explicitly designed to give users more of what it appears they want to see, keeping them on the sites and providing the tech companies with more valuable user data and revenue.
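
To make that mechanism concrete, here is a minimal, hypothetical sketch of the kind of engagement-optimized ranking Greenhill describes. The field names, weights, and scoring formula are illustrative assumptions, not any platform's actual algorithm:

```python
# A toy illustration of engagement-optimized feed ranking.
# All names and weights here are hypothetical; real platform
# ranking systems are far more complex and not publicly documented.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_engagement: float  # estimated chance the user clicks/shares (0..1)
    topic_affinity: float        # match with the user's past activity (0..1)
    hours_old: float             # age of the post

def score(post: Post) -> float:
    """Rank by predicted engagement and affinity, decayed by age.

    Note what is absent: nothing rewards accuracy. Content that
    provokes reactions rises regardless of whether it is true.
    """
    recency = 1.0 / (1.0 + post.hours_old / 24.0)  # simple time decay
    return (0.7 * post.predicted_engagement + 0.3 * post.topic_affinity) * recency

def rank_feed(posts: list[Post]) -> list[Post]:
    return sorted(posts, key=score, reverse=True)

feed = rank_feed([
    Post("calm-correction", predicted_engagement=0.10, topic_affinity=0.6, hours_old=2),
    Post("outrage-rumor", predicted_engagement=0.80, topic_affinity=0.6, hours_old=2),
])
print([p.post_id for p in feed])  # the high-engagement rumor ranks first
```

The point of the sketch is that truthfulness never enters the objective: whatever keeps users engaged, sensational misinformation included, is what gets surfaced.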

However, what fundamentally matters, as it always has, is the content of the message (its salience to the audience), the perceived authority of the source (the messenger), and how often the message is received (repetition). This key combination of a credible messenger, delivering a salient and seemingly plausible message, and doing so repeatedly, is not new or unique to the internet or social media era. Technology has changed, but how our brains process inputs, and what we find persuasive and why, has not.

Are fears warranted that, with the rise of AI and large language models like ChatGPT, misinformation and disinformation will increase and be harder to detect?

Yes, these fears are themselves quite pervasive. They are also sound: there are real reasons for concern. On the other hand, there are promising technologies, such as AI watermarking, being developed to help both messengers and audiences distinguish between real and AI-generated content. But it appears, at least based on what one can glean from publicly available, non-classified data, that we have a long way to go on that.
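
As a rough illustration of one published approach to the AI watermarking Greenhill mentions (statistical "green list" watermarking of generated text, along the lines of Kirchenbauer et al., 2023), here is a toy detector. The hash-based word partitioning and word-level granularity are simplifying assumptions, not a production method:

```python
# Toy sketch of statistical text watermarking, inspired by the
# "green list" scheme of Kirchenbauer et al. (2023). Simplified
# for illustration: real detectors operate on an LLM's token
# vocabulary, not whole words.
import hashlib
import math

def is_green(prev_word: str, word: str) -> bool:
    """Deterministically assign ~half of all words to a 'green list'
    seeded by the preceding word. A watermarking generator prefers
    green words; ordinary text picks them ~50% of the time."""
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0

def watermark_z_score(text: str) -> float:
    """z-score of the observed green-word fraction against the 50%
    expected by chance. Large positive values suggest the text was
    produced by a watermarking generator."""
    words = text.lower().split()
    n = len(words) - 1  # number of adjacent word pairs
    if n < 1:
        return 0.0
    greens = sum(is_green(a, b) for a, b in zip(words, words[1:]))
    return (greens - 0.5 * n) / math.sqrt(0.25 * n)

# Human text should hover near z = 0; text from a generator biased
# toward green words drifts to high z values as length grows.
print(watermark_z_score("the quick brown fox jumps over the lazy dog"))
```

Detection, in other words, is a statistical hypothesis test, which is part of why there is still a long way to go: paraphrasing, translation, or light editing can wash the signal out, so watermarking alone will not settle questions of provenance.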

At the same time, since people know that AI-generated content will be out there, they should, at least in theory, be primed to be more skeptical of content they encounter and to treat less of it as self-evidently or plausibly true. However, irrespective of the source (genuine, fake, or somewhere in between), if an idea or piece of information feels true to individuals when they hear it, they are less likely to interrogate or question its veracity.

How can we as consumers of information be sure of what we see and read?

Unfortunately, we cannot be certain much of the time. What we can do, however, is to ask ourselves a few questions when we encounter information. These include: Where did the information come from? Is the source credible, and why do we think so? What is the motivation of the source in sharing the information? If we immediately think the information "feels" true, what, if any, evidence would change our minds?

In other words, if we want to believe it is true, why, and if not, why not? In short, while hardly a silver bullet, being conscious of our responses to new information can help us navigate a complicated information landscape.

Daniel Dennett, professor emeritus of philosophy at Tufts, has said that he fears that trust between people as a whole will be destroyed by lifelike AI—that we won't be able to know who to trust. What is your take on that?

In much of the world, and especially in many Western liberal democracies, we have been suffering a decline-in-trust crisis for quite some time: declining trust in institutions, such as governments and the media, and in experts and expertise. This crisis has now reached critical levels in many places.

AI is neither the root of the problem nor its cause. But AI can have important, maybe even critical, exacerbatory effects, given the underlying trust crisis. "Tribal" group identity—and what one's fellow group members say about who one is to trust and what one is supposed to believe—can and often does trump facts.

AI, like many technologies, is a handmaiden rather than the master. Could that change in the future? Absolutely. But I think we have more acute and existentially important trust problems to confront and combat at present.

Provided by Tufts University

Citation: Q&A: How misinformation and disinformation spread, the role of AI, and how we can guard against them (2024, February 26) retrieved 27 April 2024 from https://phys.org/news/2024-02-qa-misinformation-disinformation-role-ai.html
