Facebook will ban users who have violated certain rules from live streaming after just one offense, the company announced this week. The new one-strike policy includes posting content in support of hate groups and terrorist organizations.
The policy change comes in response to the Christchurch shooting in New Zealand, which was live-streamed on Facebook by its perpetrator. Facebook announced the changes just as it joined Google, Twitter, Amazon and Microsoft in a joint commitment to fight extremist and terrorist content.
“The terrorist attacks in Christchurch, New Zealand, were a horrifying tragedy,” the five companies declared in a joint statement. “It is right that we come together — resolute in our commitment — to ensure we’re doing all we can to fight the hatred and extremism that lead to terrorist violence.”
“We will now apply a ‘one strike’ policy to Live in connection with a broader range of offenses,” said Facebook vice president Guy Rosen in a blog post describing his company’s new policy. “From now on, anyone who violates our most serious policies will be restricted from using Live for set periods of time – for example 30 days – starting on their first offense.”
Rosen cited linking to a post from a terrorist organization without any additional context as one example of a violation that would get a user banned from live streaming. He added that Facebook plans to extend these restrictions to other areas as well, including the ability to create ads on the platform.
Key to Facebook’s policy change around live streaming are its rules against what the company calls “dangerous individuals and organizations.” Under that policy, Facebook not only bans terrorist and hate groups, as well as human traffickers, serial murderers and organized crime, but also open support for any of those groups. From the policy:
“We do not allow symbols that represent any of the above organizations or individuals to be shared on our platform without context that condemns or neutrally discusses the content. We do not allow content that praises any of the above organizations or individuals or any acts committed by them. We do not allow coordination of support for any of the above organizations or individuals or any acts committed by them.”
Previously, users who posted a link to content from any of those groups, or expressed support for them, typically received little more than a slap on the wrist: the company would remove the post in question. Under its new policies, Facebook will immediately restrict users who violate these rules from live streaming.
In addition to these new restrictions, Facebook also pledged to invest $7.5 million in research to better detect manipulated images and videos. Like the policy change, the investment is a direct response to the Christchurch shooting, which was live-streamed on Facebook and then re-uploaded by sympathizers of the perpetrator, who gamed Facebook’s content filters by altering the source video.
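Facebook has not published the details of its matching pipeline, but the basic weakness the sympathizers exploited is easy to illustrate: a filter that fingerprints known banned media with an exact cryptographic hash stops matching as soon as the file is altered in any way, even by a single byte. The sketch below (hypothetical `fingerprint` helper, placeholder bytes standing in for video data) shows why re-encoded or trimmed copies slip past such a filter.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # An exact-match filter stores a cryptographic hash of known banned media
    # and compares each new upload's hash against that blocklist.
    return hashlib.sha256(data).hexdigest()

# Placeholder standing in for the binary contents of a flagged video.
original = b"flagged-video-bytes"

# A sympathizer re-encodes or trims the clip; even a one-byte change
# produces a completely different hash, so the filter no longer matches.
altered = original + b"\x00"

print(fingerprint(original) == fingerprint(altered))  # prints False
```

This is why detection research of the kind Facebook is funding typically turns to perceptual hashing or learned similarity measures, which aim to score near-duplicate media as close even after re-encoding.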
On Wednesday, Facebook, Google, Twitter, Microsoft and Amazon also jointly announced a new commitment to fight terrorist and extremist content. The joint statement specifically mentions live streaming as one of the areas of policy change, with a commitment to “identifying appropriate checks on live-streaming, aimed at reducing the risk of disseminating terrorist and violent extremist content online.” Each platform may take different steps to implement those checks, but it’s likely that Twitter and Amazon’s Twitch will follow up with similar rule changes soon.