Facebook has vowed to crack down on misinformation, particularly related to the current coronavirus (COVID-19) pandemic, but its filtering systems may have overcorrected — and blocked legitimate posts.
On Tuesday, Facebook users began noticing the issue and posted examples on Twitter of the social service flagging their posts as spam when they tried to share links to articles about the coronavirus outbreak from publications including The Atlantic, USA Today, Seattle Times, BuzzFeed and Dallas Morning News.
In a statement Tuesday, Facebook VP for integrity Guy Rosen said the problem was due to a bug in the company’s spam-flagging system and that the glitch had been fixed.
“We’ve restored all the posts that were incorrectly removed, which included posts on all topics — not just those related to COVID-19,” Rosen wrote. “This was an issue with an automated system that removes links to abusive websites, but incorrectly removed a lot of other posts too.”
Like numerous other companies, Facebook has urged employees across all of its offices worldwide to work from home if they are able to. In an earlier post, Alex Stamos, Facebook’s former chief security officer, who left after a disagreement with the company, suggested the problem was related to Facebook’s increased reliance on automated content review after it scaled back its human reviewers.
“It looks like an anti-spam rule at FB is going haywire,” Stamos tweeted. “Facebook sent home content moderators yesterday, who generally can’t WFH [work from home] due to privacy commitments the company has made. We might be seeing the start of the [machine learning] going nuts with less human oversight.”
In responding to Stamos, Rosen insisted that the problem stemmed from a bug in the company’s anti-spam system and was “unrelated to any changes in our content moderator workforce.”
With fewer human content reviewers, Facebook had said previously, “we’ll continue to prioritize imminent harm and increase our reliance on proactive detection in other areas to remove violating content. We don’t expect this to impact people using our platform in any noticeable way. That said, there may be some limitations to this approach and we may see some longer response times and make more mistakes as a result.”
Separately, Google’s YouTube said Monday that users may see a higher number of video takedowns — including removal of content that doesn’t actually violate YouTube’s policies — given lower staffing levels amid the COVID-19 outbreak.
On Monday, Facebook, together with Google, YouTube, LinkedIn, Microsoft, Reddit and Twitter, issued a joint statement saying they were coordinating COVID-19 response efforts. “We’re helping millions of people stay connected while also jointly combating fraud and misinformation about the virus, elevating authoritative content on our platforms, and sharing critical updates in coordination with government healthcare agencies around the world,” the companies said.