Facebook, after getting raked over the coals for the role it played in spreading disinformation and propaganda during the 2016 U.S. presidential election, outlined several new steps it’s taking ahead of the 2020 election season.

Among its new measures: Facebook has banned paid ads that suggest voting is useless or that urge people not to vote. The company also is launching a new U.S. presidential candidate ad-spending tracker and is adding more advertising-spend details at the state and regional level to help users analyze advertiser and candidate efforts to reach voters geographically.

Facebook also said it will more prominently label content on Facebook and Instagram that has been rated false (or partly false) by a third-party fact-checker.

However, the company has said it will not ban political ads that include false claims — which CEO Mark Zuckerberg, in a widely criticized speech last week, couched in the context of the company’s commitment to protecting free speech. “This isn’t about money,” he said on a call with reporters Monday. “Banning political ads would favor incumbents… I don’t think that’s what we want to do.”

Facebook has been blasted by critics including former VP Joe Biden and Sen. Elizabeth Warren for running paid ads (including one from Trump’s 2020 reelection campaign) that falsely state that Biden coerced Ukrainian officials into halting a corruption probe into his son, Hunter. In response, Warren bought ads on Facebook falsely claiming Zuckerberg and his company had endorsed Trump, in order to highlight why she believes the policy was problematic.

Among other election-protection initiatives it’s touting ahead of next year’s U.S. elections, Facebook announced an updated policy for taking action against “coordinated inauthentic behavior” by rogue actors, including foreign governments. The changes will improve Facebook’s ability “to counter new tactics and bad actors,” the company said.

Facebook claims that in the last year, it has disrupted more than 50 individual campaigns from multiple nation-states trying to interfere in elections. Since 2016, it said, it has blocked some 200 such attempts. Most of the attempts have originated from Russia, though Facebook is also seeing increasingly sophisticated attacks from Iran and China, Zuckerberg said on the call Monday.

“The existence of this activity shows actors are continuing to try to influence elections… in the U.S. and around the world,” Zuckerberg said. On Monday, Facebook announced it removed four separate networks comprising dozens of accounts, pages and groups attempting to spread misinformation in the U.S., North Africa and Latin America.

Three of those originated in Iran and one came out of Russia, the company said. The Russian network in particular, which included 50 Instagram accounts and one Facebook account, was aimed at U.S. users and “showed some links to the Internet Research Agency (IRA)” — the Kremlin-backed entity tied to the large-scale 2016 misinformation campaign on Facebook — and “had the hallmarks of a well-resourced operation that took consistent operational security steps to conceal their identity and location,” according to Facebook.

In addition, Facebook announced that it will invest $2 million — although that’s a minuscule fraction of its total revenue — to support media-literacy projects both on Facebook and elsewhere.

Starting next month, the company said, it will show the confirmed owners of a Facebook Page, will label state-controlled media outlets on their respective pages and in its ad library, and will make clear whether an ad ran on Facebook, Instagram, Messenger or the company’s Audience Network for third-party services.

The company also is launching Facebook Protect, a program aimed at providing better security for the accounts of elected officials, candidates, their staffs and others “who may be particularly vulnerable to targeting by hackers and foreign adversaries.” Participants who enroll a Page in Facebook Protect will be required to turn on two-factor authentication, and their accounts will be monitored for hacking.

Facebook’s latest attempts to reduce election interference come after Zuckerberg last year was hauled before Congress to explain how his company’s platform was used by Russian-funded trolls to try to influence the 2016 U.S. election. According to the social-media giant, Russia-linked content reached an estimated 126 million people during the 2016 campaign season and into 2017.

Per a New York Times report, Facebook had information about Russia’s efforts to spread propaganda across the platform as early as the spring of 2016 — an allegation Facebook has denied.

In addition, data on millions of Facebook users improperly ended up in the possession of Cambridge Analytica, a now-defunct political consulting firm that used the information to target voters during the 2016 presidential election on behalf of Donald Trump’s campaign.

As part of Facebook’s PR initiative about its preparations for the 2020 elections, the company arranged an interview between Zuckerberg and “NBC Nightly News” anchor Lester Holt, portions of which will air Monday on the program.

Meanwhile, as first reported by Bloomberg, Zuckerberg and his wife, Priscilla Chan, recently sent emails to the campaign manager of Democratic presidential hopeful Pete Buttigieg recommending the hiring of specific individuals. On the call Monday with reporters, Zuckerberg downplayed the recommendations, saying, “This should probably not be misconstrued as if I’m, like, deeply involved in trying to support their campaign or something like that.”

This Wednesday (Oct. 23), Zuckerberg is slated to testify before the U.S. House Committee on Financial Services about Facebook’s Libra cryptocurrency plans.