A Facebook representative plans to tell a Senate committee on Tuesday that Russia-linked content reached an estimated 126 million people during the 2016 campaign season and into 2017.
In prepared testimony that was submitted to the Senate Judiciary Committee, the Facebook representative said that Russian operatives were responsible for 80,000 posts from 2015 to 2017, a source told Variety.
Facebook has previously said that Russia-linked firms purchased about 3,000 ads on its platform during that time period, in spots that focused on divisive political issues. The latest disclosure is a signal that the potential influence of Russian messaging was more widespread than initially revealed to lawmakers and their staffs.
A Senate Judiciary subcommittee is holding a hearing on Tuesday called “Extremist Content and Russian Disinformation Online: Working with Tech to Find Solutions.” Facebook general counsel Colin Stretch is scheduled to testify, along with representatives from Twitter and Google.
The three companies are also sending representatives to hearings planned on Wednesday by the Senate and House intelligence committees. They are each expected to outline steps they have taken to try to curb suspicious activity and fake accounts.
Meanwhile, Variety obtained excerpts from the planned testimony of Twitter’s representative at the hearing, acting general counsel Sean Edgett. He said that the company detected examples of automated activity and “deliberate misinformation” in the lead-up to the 2016 election.
In the testimony, Twitter said that it has detected 2,752 accounts “that we now believe are associated with the Internet Research Agency,” the Russian company that has waged influence campaigns on behalf of the Russian government.
“All 2,752 accounts have been suspended, we are proactively giving committee investigators the handles of these accounts, and we have taken steps to block future registrations related to these accounts — and we have pledged to Congress that we will inform them as we uncover more related accounts,” Twitter said in its testimony.
Twitter had earlier told lawmakers and staffers that it identified 201 accounts connected to Russian-influence activity.
Overall, Twitter said that tweets linked to Russian automated accounts made up just a small fraction of election-related activity. Reviewing the period from Sept. 1 to Nov. 15, 2016, it found 189 million election-related tweets, of which tweets from Russia-linked automated accounts accounted for less than three-quarters of 1%.
“State-sanctioned manipulation of elections by sophisticated foreign actors is a new challenge for us — and one that we are determined to meet,” Twitter said in its testimony. “Twitter has long confronted problems of spam and automation, including how they can be used to amplify messages using novel and adaptive strategies. So we know we cannot defeat this evolving, shared threat alone. Working with the broader community, we will continue to test, to learn, to share, and to improve so that our product remains effective and safe.”
Twitter also said that its review found that engagement by automated and Russia-linked accounts with @Wikileaks, @DCLeaks_, and @GUCCIFER_2 was “substantially higher” than average levels of automated engagement, including engagement with other high-profile accounts.
Twitter announced that it would no longer run ads from RT and Sputnik, two Russian state-backed news outlets. It also announced changes to its advertising policies. After the election, it launched an initiative to improve detection of automated accounts and spam. The company said that it now blocks about 450,000 suspicious logins each day that it believes are generated through automation.
Google, meanwhile, published a blog post on Monday outlining the results of its investigation into Russian influence. It found that two accounts linked to the Internet Research Agency spent a total of $4,700 during the 2016 election cycle. The ads were not targeted to specific groups of users, Google said.
On YouTube, it found 18 channels “likely” associated with Russian efforts related to the 2016 election. The channels featured 1,108 videos representing 43 hours of content and totaling 309,000 U.S. views from June 2015 to November 2016. But the videos had low view counts — only around 3% had more than 5,000 views, Google said.
Google also said that it found “no evidence” that state-linked or state-funded actors used improper methods to try to boost their rankings in search results.
“While we have found only limited activity on our services, we will continue to work to prevent all of it, because there is no amount of interference that is acceptable,” wrote Kent Walker, Google’s general counsel, and Richard Salgado, director of law enforcement and information security.