Facebook Will Not Restrict Political Lies Ahead of Israel's March Election

In a Tel Aviv meeting with local journalists, Facebook executives said the company considers fact-checking political statements to be a form of censorship

Omer Kabir | 17:18, 27.01.20

Facebook does not intend to fact-check statements made by Israeli politicians, nor to take action to limit the spread of lies told by candidates ahead of the March election, according to Jessica Zucker, product policy manager at Facebook. Facebook's job is not to censor politicians, Zucker said Monday during a meeting with Israeli journalists held at the company's Tel Aviv offices. Censoring political discourse would limit people's ability to be exposed to what politicians say, and would also reduce politicians' responsibility for their statements, she said. The issue elicited extensive discussion within the company, Zucker said, but ultimately, Facebook believes that fact-checking politicians' statements constitutes censorship.

Several senior executives from Facebook's global operations were present at the meeting, convened to present the actions the company is taking to limit foreign interference and the distribution of fake news during the upcoming election. One of the most controversial issues raised during the meeting was Facebook's policy of not limiting false statements by politicians as long as they adhere to community guidelines, and of letting political figures spread falsehoods via ads on Facebook.

Facebook product policy manager Jessica Zucker. Photo: Tomer Poltin

According to Zucker, if a politician shares content that has already been checked and found to be false, his or her ability to spread it will be limited, just as it would be for anyone else. The post will carry a warning making clear that the content is false or misleading, and the politician will not be able to include it in an ad, she said. But in the case of original content created by politicians, Facebook will not take any action to verify the content, limit its spread, or prevent it from being included in ads, even if it turns out to be false.

Guido Buelow, who handles strategic partner development for news in EMEA at Facebook, took a stab at explaining why the company has not followed Twitter in forbidding political ads altogether. Banning something completely is much easier, he said, but Facebook wants to give people a voice; its platform is a strong tool for contenders and parties that lack access to traditional media, and thus supports democracy. In April, ahead of Israel's first round of elections in 2019, many new parties created pages, Buelow said; if they had no access to ads, they would have had no way to reach voters. The company has invested much effort in creating transparency for political ads, he said, adding that drawing the line between what should and should not be allowed in political ads is difficult.

According to Jordana Cutler, Facebook's head of policy in Israel, the first election of 2019 demonstrated why political ads are needed, because small sums enabled parties and candidates to reach large crowds. Facebook is proud of this, she said, especially since the company hardly makes money from those ads—around 0.5% of the company's revenues. The criticism Facebook receives from countries, governments, and the press is not worth the money, she said, but Facebook is doing it because the company wants to do something positive and enable open discourse.

Most of the meeting was dedicated to showcasing Facebook's policy and guidelines for the upcoming election, with much of the focus on the fight against fake news. Fighting misleading information is one of Facebook's main pillars, Zucker said; while the phenomenon is not new, what is new is how far such information can spread using Facebook, and the ramifications of that reach, she said. But while Facebook takes responsibility, it does not believe it is up to the company to decide what is true and what is not, which is also why it works with outside experts.

Facebook takes a threefold approach to the issue, Zucker said: removing certain content, limiting the spread of other content, and presenting as much verified information as possible in the right context. The company removes videos in which artificial intelligence was used to make it seem as though people are saying things they did not, she said. The company also removes any misleading information about voting or participation in the election process, such as false information about the date, voting location, or eligibility. The company is also attempting to understand local nuances, aiming to identify supposedly innocent content that could incite real-world violence, she said.

Fosco Riani, public policy associate manager for EMEA at Facebook, said that only a small part of the activity on Facebook violates its guidelines. Of that, most violations are scams or spam intended to net monetary gain, he said. Only a fraction are operations designed to sway public discourse or election results, but their potential impact means the company must do whatever it can to restrict the activity of those behind them, he said.

According to Riani, Facebook removed 50 networks of bots or fake accounts in 2019. These networks are removed not because of the content they create or distribute, but based on their behavior, he said: whether they hide their identity and pose as people other than their actual operators. Once such networks are removed, he said, Facebook works to develop products that will make similar operations harder.