Internet platforms played a central role in the mass shootings at New Zealand mosques Friday — which left at least 49 dead — and immediately prompted renewed calls for Facebook, YouTube and Twitter to take much stronger steps to combat the spread of violent hate speech.
One of the attackers live-streamed the attack on Facebook Live in a 17-minute video, which showed him entering the Al Noor Mosque in Christchurch, New Zealand, and shooting multiple people. Prior to the massacre, the individual allegedly had posted a 74-page anti-Muslim manifesto decrying “white genocide” on Twitter and discussion site 8chan, a notorious haven for hate speech. In a forum on 8chan, someone on Friday at 1:30 p.m. New Zealand time posted the message: “I will carry out and attack against the invaders, and will even livestream the attack via Facebook,” Reuters reported.
It’s not clear when Facebook removed the video or shut down the accounts in question. The company issued a statement from Facebook spokeswoman Mia Garlick, who said in part: “Police alerted us to a video on Facebook shortly after the livestream commenced and we quickly removed both the shooter’s Facebook and Instagram accounts and the video. We’re also removing any praise or support for the crime and the shooter or shooters as soon as we’re aware.”
Twitter also disabled the profile of the alleged attacker, and YouTube said it was “working vigilantly to remove any footage” related to the Christchurch attacks. Still, some of the content posted by the alleged shooter continued to be available for hours afterward, as people cropped the video or posted the text of the manifesto as an image to evade detection by the platforms’ automated systems, the New York Times reported.
The internet giants have repeatedly vowed to crack down on violent extremism and hate-related content. In 2017, for example, YouTube, Facebook, Twitter and Microsoft launched a cross-company effort to combat terrorist and extremist material. But as the events in New Zealand illustrate, they remain unable to stanch the viral propagation of disturbing and violent content in real time.
Police in New Zealand said four people were in custody Friday evening in connection with the mass murders, and that one suspect — reportedly a 28-year-old Australian man with extreme anti-immigrant and anti-Muslim views — was charged with murder.
To be sure, even if Facebook and others had instantly blocked the suspected attacker’s use of social media to live-stream his mass shooting and spew hateful rhetoric, that would not necessarily have thwarted the killings in New Zealand. But authorities are concerned that the horrific content could inspire copycat crimes. Law enforcement officials have urged individuals and media organizations not to share the video or other content from the terrorism suspect.
In the wake of the worst mass shooting in New Zealand’s history, politicians have called on the tech companies to invest more in curbing the spread of violent content and hate speech.
Sajid Javid, Britain’s Home Secretary, who is responsible for public safety and security, said in a Twitter post in response to YouTube’s statement: “You really need to do more @YouTube @Google @facebook @Twitter to stop violent extremism being promoted on your platforms. Take some ownership. Enough is enough.”
Similar criticism was leveled by Damian Collins, a member of British Parliament who has been highly critical of Facebook’s business practices of late. “It’s very distressing that the terrorist attack in New Zealand was live streamed on social media & footage was available hours later,” he said in a tweet. “There must be a serious review of how these films were shared and why more effective action wasn’t taken to remove them.”
— Stewart Clarke contributed to this report.
Pictured above: Scene outside a mosque in central Christchurch, New Zealand, after a mass shooting killed at least 49 people.