YouTube has created an invitation-only program giving about 200 individuals and groups special status in identifying material suspected of violating its community guidelines — but it’s aimed at removing content such as hate speech and pornography, not copyright-protected videos.
The so-called “super flagger” program came to light last week in a Financial Times article reporting that the U.K. government’s anti-terrorism unit has the ability to alert YouTube to multiple videos suspected of containing “extremist material.”
The Google-owned Internet video service quietly introduced the super-flagger program at the end of 2012. About 200 participants currently have the ability to flag up to 20 videos at once for review, according to a source familiar with the program, confirming a Wall Street Journal report. Of those, fewer than a dozen are government agencies or private organizations; most are individuals with a reliable track record of identifying inappropriate content, the source said.
Asked for comment, a Google rep emphasized that the company itself determines which videos violate its terms of service; the “super flaggers” cannot remove videos directly. The program is intended to help ensure content conforms to YouTube’s Community Guidelines, which prohibit pornography, drug abuse and harassment.
“Community flagging is a tool available to anyone on YouTube, allowing us to maintain a safe and vibrant platform with the help of our users,” Google said in a statement. “To make the process more efficient, we invite a small set of users who flag content that violates our Community Guidelines regularly and accurately to access more advanced flagging tools.”
Google added, “Any suggestion that a government or any other group can use these flagging tools to remove YouTube content themselves is wrong.”
The company provides information on what it calls the YouTube Trusted Flagger Program in a document on its website.
YouTube’s antipiracy efforts, meanwhile, are run under a completely separate program called Content ID, which automatically identifies material suspected of infringing copyrights.
According to YouTube, the Content ID system scans the equivalent of 400 years of video every day, matching it against a database of 25 million reference files of copyrighted content supplied by rights holders. About 5,000 content partners use the service, including TV broadcasters, movie studios and record labels, the company has said.
Starting late last year, YouTube expanded Content ID to cover videos uploaded by affiliates of multichannel networks such as Maker Studios and Machinima.