Tech companies scramble to remove New Zealand shooting video

March 15, 2019 by Kelvin Chan
This combination of images shows logos for companies from left, Twitter, YouTube and Facebook. These Internet companies and others say they're working to remove video footage filmed by a gunman in the New Zealand mosque shooting that was widely available on social media hours after the horrific attack. (AP Photos/File)

Internet companies scrambled Friday to remove graphic video filmed by a gunman in the New Zealand mosque shootings that was widely available on social media for hours after the horrific attack.

Facebook said it took down a livestream of the shootings and removed the shooter's Facebook and Instagram accounts after being alerted by police. At least 49 people were killed at two mosques in Christchurch, New Zealand's third-largest city.

Using what appeared to be a helmet-mounted camera, the gunman livestreamed in horrifying detail 17 minutes of the attack on worshippers at the Al Noor Mosque, where at least 41 people died. Several more worshippers were killed at a second mosque a short time later.

The shooter also left a 74-page manifesto that he posted on social media under the name Brenton Tarrant, identifying himself as a 28-year-old Australian and white nationalist who was out to avenge attacks in Europe perpetrated by Muslims.

"Our hearts go out to the victims, their families and the community affected by this horrendous act," Facebook New Zealand spokeswoman Mia Garlick said in a statement.

Facebook is "removing any praise or support for the crime and the shooter or shooters as soon as we're aware," she said. "We will continue working directly with New Zealand Police as their response and investigation continues."

Twitter, YouTube owner Google and Reddit also were working to remove the footage from their sites.

The furor highlights once again the speed at which graphic and disturbing content from a tragedy can spread around the world and how Silicon Valley tech giants are still grappling with how to prevent that from happening.

British tabloid newspapers such as The Daily Mail and The Sun posted screenshots and video snippets on their websites.

One journalist tweeted that several people sent her the video via the Facebook-owned WhatsApp messaging app.

New Zealand police urged people not to share the footage, and many internet users called for tech companies and news sites to take the material down.

Some people expressed outrage on Twitter that the videos were still circulating hours after the attack.

In this frame from video that was livestreamed Friday, March 15, 2019, a gunman who used the name Brenton Tarrant on social media reaches for a gun in the back of his car before the mosque shootings in Christchurch, New Zealand. (Shooter's Video via AP)
"Google is actively inciting violence," tweeted British journalist Carole Cadwalladr with a screen grab of search results of the video.

The video's spread underscores the challenge Facebook faces even after stepping up efforts to keep inappropriate and violent content off its platform. In 2017, the company said it would hire 3,000 people to review videos and other posts, on top of the 4,500 it already tasked with identifying criminal and other questionable material for removal.

But that's just a drop in the bucket of what is needed to police the social media platform, said Siva Vaidhyanathan, author of "Antisocial Media: How Facebook Disconnects Us and Undermines Democracy."

If Facebook wanted to monitor every livestream to prevent disturbing content from making it out in the first place, "they would have to hire millions of people," something it's not willing to do, said Vaidhyanathan, who teaches media studies at the University of Virginia.

"We have certain companies that have built systems that have inadvertently served the cause of violent hatred around the world," Vaidhyanathan said.

Facebook and YouTube were designed to share pictures of babies, puppies and other wholesome things, he said, "but they were expanded at such a scale and built with no safeguards such that they were easy to hijack by the worst elements of humanity."

With billions of users, Facebook and YouTube are "ungovernable" at this point, said Vaidhyanathan, who called Facebook's livestreaming service a "profoundly stupid idea."

In footage that at times resembled scenes from a first-person shooter video game, the mosque shooter was seen spraying terrified worshippers with bullets, sometimes re-firing at people he had already cut down.

He then walked outside, shooting at people on a sidewalk. Children's screams could be heard in the distance as he strode to his car to get another rifle, then returned to the mosque, where at least two dozen people could be seen lying in pools of blood.

He walked back outside, shot a woman, got back in his car, and drove away.

The livestream video was reminiscent of violent first-person shooter video games such as "Counter-Strike" or "Doom" as the gunman went around corners and calmly entered rooms firing at helpless victims. Many shooting games allow players to toggle between close-range and long-range weapons, and the gunman switched from a shotgun to a rifle during the video, reloading as he moved around.

At one point, the shooter even paused to give a shout-out to one of YouTube's top personalities, known as PewDiePie, with tens of millions of followers, who has made jokes criticized as anti-Semitic and posted Nazi imagery in his videos.

Flowers rest at a road block as a police officer stands guard near the Linwood mosque, site of one of the mass shootings at two mosques in Christchurch, New Zealand, Saturday, March 16, 2019. (AP Photo/Mark Baker)
"Remember, lads, subscribe to PewDiePie," the gunman said.

The seemingly incongruous reference to the Swedish vlogger known for his video game commentaries as well as his racist references was instantly recognizable to many of his 86 million followers.

The YouTube sensation has been engaged in an online battle over which channel has the most subscribers, and his followers have taken to posting messages encouraging others to "subscribe to PewDiePie."

PewDiePie, whose real name is Felix Kjellberg, said on Twitter he felt "absolutely sickened" that the alleged gunman referred to him during the livestream. "My heart and thoughts go out to the victims, families and everyone affected," he said.

The hours it took to take the violent video and manifesto down are "another major black eye" for social media platforms, said Dan Ives, managing director of Wedbush Securities.

The rampage's broadcast "highlights the urgent need for media platforms such as Facebook and Twitter to use more artificial intelligence as well as security teams to spot these events before it's too late," Ives said.

Hours after the shooting, Reddit took down two subreddits known for sharing video and pictures of people being killed or injured, r/WatchPeopleDie and r/Gore, apparently because users were sharing the mosque attack video.

"We are very clear in our site terms of service that posting content that incites or glorifies violence will get users and communities banned from Reddit," it said in a statement. "Subreddits that fail to adhere to those site-wide rules will be banned."

Videos and posts that glorify violence are against Facebook's rules, but Facebook has drawn criticism for responding slowly to such items, including video of a slaying in Cleveland and a live-streamed killing of a baby in Thailand. The latter was up for 24 hours before it was removed.

In most cases, such material gets reviewed for possible removal only if users complain. News reports and posts that condemn violence are allowed. This makes for a tricky balancing act for the company. Facebook says it does not want to act as a censor, as videos of violence, such as those documenting police brutality or the horrors of war, can serve an important purpose.


4 comments


TheGhostofOtto1923
Mar 15, 2019
This is Trump's fault. I just know it.

"She said platforms like YouTube have the ability to find and remove violent videos with keyword searches, but more people are needed to monitor the platforms. 'They have the tools with social listening to go in with keyword terms and have moderators view and remove all videos linked to this type of incident,' she said."

-Hmm, let's see now...

"YouTube receives roughly 300,000 individual video uploads each day, amounting to 80k hours of video and 24TB of data."

-So we are looking at a Ministry of Mods of perhaps 100k people?

Not to mention all the other social platforms, news outlets (for instance, the Trayvon Martin propaganda has caused similar such violence and dysphoria), print media... we are talking about a Pentagon-sized effort. And since progs do not trust private enterprise to police or regulate itself, this will eventually be an actual Pentagon-type agency.

Which is alright as long as it is an AI and not human-based.
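[Editor's note: the "100k people" guess above can be sanity-checked with a quick back-of-envelope calculation. This sketch takes the upload volume from the comment at face value; the effective review time per shift is an assumption, not a published figure.]

```python
# Back-of-envelope check of the moderator headcount needed to watch
# every hour of uploaded video once, in real time.
uploaded_hours_per_day = 80_000   # daily upload volume cited in the comment (unverified)
review_hours_per_shift = 6        # assumed effective watch time per 8-hour shift

# Moderators needed for a single real-time viewing pass per day:
mods_for_one_pass = uploaded_hours_per_day / review_hours_per_shift
print(round(mods_for_one_pass))   # prints 13333
```

A single viewing pass already demands over 13,000 full-time reviewers; weekend coverage, staff turnover, multiple languages, and repeat reviews of flagged content would multiply that several times over, which is roughly consistent with the six-figure guess above.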
V4Vendicar
Mar 17, 2019
"This is trumps fault. I just know it."

You claim to know many things Otto. Most of it, as we have seen, is wrong.

V4Vendicar
Mar 17, 2019
Facebook is a conduit for personal and public communication.

What we are learning is that unmoderated public communication, communication without monitoring or gatekeepers, can be very corrosive to society, as it allows the formation and magnification of the messages of kooks and criminals.

I don't blame Facebook for facilitating communication any more than I blame the creators of the WWW for doing the same.


Facebook has another characteristic that is worrisome, and this is entirely distinct from the first. It is its data acquisition on its users and others.


(cont.)
V4Vendicar
Mar 17, 2019
This is not in any way unique to Facebook. Google does it. Microsoft does it. Credit card and debit card companies do it. Governments do it, and even your local grocery store has been doing it for decades.

It is this data acquisition that is paying the bills to keep these internet services running, and without that revenue model the services would not be available at all.

This is a problem caused by capitalism, not Facebook.
