Big tech platforms have taken the fight against coronavirus misinformation seriously since the COVID-19 outbreak started in the U.S. earlier this year, but they’ve succeeded to markedly varying degrees.
That’s according to a new study released Tuesday by Oxford University’s Reuters Institute, which analyzed 225 pieces of misinformation published between January and the end of March 2020. It found that 59% of Twitter posts in the sample rated as false by its fact-checkers remained online, while the figures were much lower for YouTube (27%) and Facebook (24%).
To be fair, the blinding rate at which user-generated content is uploaded to big tech platforms makes completely eradicating unsavory content from places like YouTube, Twitter, and Facebook appear, at present, a Sisyphean task.
And the large tech companies have been vocal about their commitment to curbing fake news surrounding the coronavirus, as evidenced by Google’s move to surface information from mainstream news sources relating to COVID-19, Twitter’s deletion of coronavirus-related posts from prominent politicians deemed harmful, and WhatsApp’s decision to limit message forwarding.
But big tech’s campaign against coronavirus misinformation doesn’t change the fact that damage has already been done. That’s particularly troubling for companies like YouTube and Google parent Alphabet, Twitter, and Facebook, which found themselves dead center in the government regulation spotlight in 2019, in part because of fake news previously proliferating on their platforms.
While Alphabet, Twitter, and Facebook can fairly argue that the massive user bases tapping their platforms can make content moderation resemble an enormous game of whack-a-mole, big tech’s opponents will still be armed with damning instances of harmful coronavirus content spreading via these platforms to point to when the pandemic subsides.
For example, more than 15,000 tweets contained a link to a Medium post that downplayed the severity of the coronavirus before Medium took the post down, according to The New York Times. That may not seem like much engagement in the context of Twitter’s daily user base of over 150 million, but in a public health crisis, even a small amount of fake news spreading can have serious consequences.
Moreover, videos of individuals promoting unproven treatments and bogus vaccines for COVID-19 have surfaced on YouTube. And Facebook groups have been used to spread conspiracy theories linking 5G technology and the spread of coronavirus, which have culminated in petrol bomb attacks on phone masts.
It’s no wonder the World Health Organization warned in February that “a massive ‘infodemic’” was accompanying the coronavirus outbreak. These examples also help explain why a mid-March survey conducted by Pew Research Center found that respondents who said social media was their most common way of getting news performed poorly on a question about the potential availability of a COVID-19 vaccine, compared with respondents whose most common news sources were outlets like cable TV or radio.
Examples like these seem likely only to amplify the calls to more tightly regulate big tech that were so prominent in 2019.
It’s possible these calls could contribute to harsh fines against big tech companies, which have already been threatened with potential fines overseas (albeit tiny relative to the revenue big tech generates) for failing to adequately moderate certain content.
But more likely, in late 2020 we’ll see the missteps in big tech’s fight against coronavirus fake news serve as further justification for some within the U.S. government to attempt a tighter grip on companies like Facebook, Twitter, and Google parent Alphabet, potentially by pushing for measures such as heightened disclosure of progress in eradicating false and misleading content. These companies are, at this point, all too familiar with the ‘techlash’ catching up with them.