Deepfake videos could destroy trust in society—here's how to restore it


It has the potential to ruin relationships, reputations and our online reality. "Deepfake" artificial intelligence technology promises to create doctored videos so realistic that they're almost impossible to distinguish from the real thing. So far it has mostly been used to create altered pornographic clips featuring celebrity women's faces, but once the techniques are perfected, deepfake revenge porn purporting to show people cheating on their partners won't be far behind.

But beyond becoming a nasty tool for stalkers and harassers, deepfakes threaten to undermine trust in political institutions and society as a whole. The White House recently justified temporarily banning a reporter from its press conferences using what was reportedly sped-up but genuine footage of an incident involving the journalist. Imagine the implications of seeing ultra-realistic but artificial footage of government leaders planning assassinations, CEOs colluding with foreign agents or a renowned philanthropist abusing children.

So-called fake news has already increased many people's scepticism towards politicians, journalists and other public figures. It is becoming so easy to create entirely fictional scenarios that we can no longer take any footage at face value. This threatens our political, legal and media systems, not to mention our personal relationships. We will need to create new forms of consensus on which to base our social reality. New ways of checking and distributing power—some political, some technological—could help us achieve this.

Fake scandals, fake politicians

Deepfakes are scary because they allow anyone's image to be co-opted, and call into question our ability to trust what we see. One obvious use of deepfakes would be to falsely implicate people in scandals. Even if the incriminating footage is subsequently proven to be fake, the damage to the victim's reputation may be impossible to repair. And politicians could tweak old footage of themselves to make it appear as if they had always supported something that had recently become popular, updating their positions in real time.

There could even be public figures who are entirely imaginary: original creations, but not authentic people. Meanwhile, video footage could become useless as evidence in court. Broadcast news could be reduced to people debating whether clips were authentic or not, using ever more complex AI to try to detect deepfakes.

But the arms race that already exists between fake content creators and those detecting or debunking disinformation (such as Facebook's planned fake news "war room") hides a deeper issue. The mere existence of deepfakes undermines confidence and trust, just as the possibility that an election was hacked brings the validity of the result into question.

While some people may be taken in by deepfakes, that is not the real problem. What is at stake is the underlying social structure in which we all agree that some form of truth exists, and the social realities that are based on this trust. It is not a matter of the end of truth, but the end of the belief in truth – a post-trust society. In the wake of massive disinformation, even honest public figures will be easily ignored or discredited. The traditional organisations that have supported and enabled consensus – government, the press – will no longer be fit for purpose.

Blockchain trust

New laws to regulate the use of deepfakes will be important for people who have damaging videos made of them. But policy and law alone will not save our systems of governance. We will need to develop new forms of consensus, new ways to agree on social situations based on alternative forms of trust.

One approach will be to decentralise trust, so that we no longer need a few institutions to guarantee whether information is genuine and can instead rely on multiple people or organisations with good reputations. One way to do this could be to use blockchain, the technology that powers Bitcoin and other cryptocurrencies.

Blockchain works by creating a public ledger stored on multiple computers around the world at once and made tamper-proof by cryptography. Its algorithms enable the computers to agree on the validity of any changes to the ledger, making it much harder to record false information. In this way, trust is distributed among all the computers, which can scrutinise each other, increasing accountability.
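To make this tamper-evidence property concrete, here is a minimal sketch in Python (an illustrative toy of our own, not code from the article or from any real blockchain): each ledger entry commits to the hash of the previous one, so altering any past record invalidates everything that follows.

```python
import hashlib
import json

def entry_hash(record, prev):
    """Hash a record together with the previous entry's hash."""
    payload = json.dumps({"record": record, "prev": prev}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append(ledger, record):
    """Append a record, chaining it to the hash of the latest entry."""
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    ledger.append({"record": record, "prev": prev,
                   "hash": entry_hash(record, prev)})

def verify(ledger):
    """Recompute every hash; tampering with any past entry breaks the chain."""
    prev = "0" * 64
    for entry in ledger:
        if entry["prev"] != prev or entry["hash"] != entry_hash(entry["record"], prev):
            return False
        prev = entry["hash"]
    return True

ledger = []
append(ledger, "clip 1 published by source A")
append(ledger, "clip 2 published by source B")
print(verify(ledger))                      # True

ledger[0]["record"] = "doctored history"   # tamper with an old entry
print(verify(ledger))                      # False: any copy-holder can detect the edit
```

A real blockchain adds a consensus algorithm on top of such a chain, so that many independent computers must agree before a new entry is accepted; a forger would have to subvert most of them at once rather than a single record-keeper.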

More democratic society

We can also look to more democratic forms of government and journalism. For example, liquid democracy allows voters to vote directly on each issue or temporarily assign their votes to delegates in a more flexible and accountable way than handing over full control to one party for years. This would allow the public to look to experts to make decisions for them where necessary but swiftly vote out politicians who disregarded their views or acted dishonestly, increasing trust and legitimacy in the political system.
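To illustrate the mechanics (a hypothetical sketch with invented voters, not a scheme described in the article), a liquid-democracy tally can follow each delegated vote along its chain until it reaches someone who voted directly:

```python
def tally(direct_votes, delegations):
    """Count votes, following delegation chains to a direct voter.

    direct_votes: voter -> "yes"/"no" for those who voted themselves.
    delegations:  voter -> delegate for those who handed their vote on.
    Cycles or unresolved chains are treated as abstentions.
    """
    counts = {}
    for voter in set(direct_votes) | set(delegations):
        current, seen = voter, set()
        # Follow the chain of delegates until a direct vote is found.
        while current not in direct_votes:
            if current in seen or current not in delegations:
                current = None  # cycle or dead end: abstain
                break
            seen.add(current)
            current = delegations[current]
        if current is not None:
            choice = direct_votes[current]
            counts[choice] = counts.get(choice, 0) + 1
    return counts

# Alice votes directly; Bob delegates to Alice; Carol delegates to Bob.
print(tally({"alice": "yes", "dave": "no"},
            {"bob": "alice", "carol": "bob"}))
# counts: yes=3, no=1 (dict order may vary)
```

Treating cycles and dead ends as abstentions is just one simple design choice; a real system would also need to handle vote withdrawal and per-issue delegation.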

In the press, we could move towards more collaborative and democratised news reporting. Traditional journalists could use the positive aspects of social media to gather information from a more diverse range of sources. These contributors could then discuss and help scrutinise the story to build a consensus, improving the media's reputation.

The problem with any system that relies on the reputation of key individuals to build trust is how to prevent that reputation from being misused or fraudulently damaged. Checks such as Twitter's "blue tick" account verification for public figures can help, but better legal and technical protections are also needed: more protected rights to privacy, better responses to antisocial behaviour online, and better privacy-enhancing technologies built in by design.

The potential ramifications of deepfakes should act as a call to action in redesigning systems of trust to be more open, more decentralised and more collective. And now is the time to start thinking about a different future for society.




This article is republished from The Conversation under a Creative Commons license. Read the original article.



User comments

Feb 06, 2019
Blockchain has its own problems. Passwords! There was news on 02/05/2019 about a man who took the passwords to his blockchain wallet to his grave, and those people lost their bitcoins.

https://futurism....d-wallet

"The grim saga demonstrates that even though blockchain technology can help promote accountability and transparency, it can also allow companies to operate without legal oversight — a glaring problem when it comes to financial matters."

Feb 06, 2019
I suspect there are technical means to analyze a video or photo to determine if it has been modified. I know there are some video formats, such as RAW, that cannot be doctored, or so they say.

Feb 07, 2019
I suspect there are technical means to analyze a video or photo to determine if it has been modified. I know there are some video formats, such as RAW, that cannot be doctored, or so they say.


Any ordinary form of analysis can be defeated. The analysis ultimately produces some sort of score that predicts whether the video has been doctored; the AI can be trained to reduce that score to an ordinary level. Ultimately, there will need to be some absolute method of identification. For example, each camera could produce a random pattern in the image, which changes on every frame and thus appears to be just noise. Without access to the camera (or whatever seed number / algorithm it uses for the random pattern), it would be impossible to discover and defeat the pattern.
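A hedged sketch of the scheme this comment proposes (illustrative only; no real camera implements exactly this, and all names are invented): derive a per-frame pseudorandom pattern from a secret key held by the camera, embed it as faint noise, and verify by correlating a frame against the expected pattern.

```python
import hashlib
import hmac

import numpy as np

def frame_pattern(key, frame_index, shape):
    """Derive a pseudorandom +/-1 pattern for one frame from a secret key."""
    digest = hmac.new(key, str(frame_index).encode(), hashlib.sha256).digest()
    rng = np.random.default_rng(int.from_bytes(digest[:8], "big"))
    return rng.choice([-1.0, 1.0], size=shape)

def embed(frame, key, frame_index, strength=1.0):
    """Add the key-derived pattern to the frame as faint noise."""
    return frame + strength * frame_pattern(key, frame_index, frame.shape)

def verify(frame, key, frame_index, threshold=0.5):
    """Correlate the frame with the expected pattern; edits destroy it."""
    pattern = frame_pattern(key, frame_index, frame.shape)
    score = float(np.mean(frame * pattern))
    return score > threshold

key = b"camera-secret-key"
original = np.zeros((64, 64))               # stand-in for real pixel data
marked = embed(original, key, frame_index=0)
print(verify(marked, key, 0))               # True: pattern present
print(verify(np.zeros((64, 64)), key, 0))   # False: pattern absent
```

Without the key the pattern is indistinguishable from sensor noise, and editing the frame destroys the correlation; the obvious weakness is that the key itself must be kept secret yet available for verification, which is its own trust problem.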

Feb 07, 2019
I suspect there are technical means to analyze a video or photo to determine if it has been modified.

'Technical means' is just a fancy term for some measurable feature. The whole idea of these adversarial neural networks - which are used to create the deep-fake videos - is to change the video in such a way that all such features pass inspection (where 'inspection' is done either by an algorithm or by another neural network trained on deep-fake videos).

With two neural networks working against each other - one checking for fakes and the other trying to defeat the check - it's always the one that is trained last which wins (and in this case that is always the NN that created the deep-fake).
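A deliberately simplified numeric toy (not an actual GAN, just the "trained last wins" dynamic this comment describes): the detector fits a threshold between real and fake feature values, and the forger, trained afterwards, nudges its fakes just past it.

```python
# Toy arms race: the detector flags anything below its threshold as fake;
# the forger, trained after the detector, moves its fakes just past it.
real, fake = 1.0, 0.0
for rnd in range(5):
    threshold = (real + fake) / 2   # detector trains first
    fake = threshold + 0.01         # forger trains last, defeats the check
    print(f"round {rnd}: threshold={threshold:.3f}, "
          f"fake={fake:.3f}, flagged={fake < threshold}")
# The fakes creep toward the real value, and the last-trained side
# always passes the check.
```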
