Startup With Israeli Intelligence Bona Fides Offers Defense Against Fake News

Tel Aviv-based Cyabra uses a deep learning and big data-based algorithm to identify patterns typical of fake profiles, avatars, or bots

Raphael Kahan and Asaf Shalev | 12:48, 11.06.18
After years of experience mounting digital influence campaigns in business and politics, a handful of Israeli entrepreneurs switched sides and founded Cyabra Strategy Ltd., a startup promising to help politicians and brands defend themselves from digital attacks involving fake news and disinformation.

 


 

Cyabra's employees come from cybersecurity and intelligence units of the Israeli military, and from business intelligence companies that specialize in shaping activity on social media using fake profiles, avatars and bots. This experience helped the company develop a deep learning and big data-based algorithm capable of recognizing when a disinformation attack is happening.

 

Cyabra. Photo: Yaniv Shmidt
Founded in August 2017, Cyabra has raised about one million dollars, according to Cyabra co-founder and Chief Marketing Officer Sendi Frangi, and employs nine people. Earlier this year, Coca-Cola selected the company as one of 12 startups for the inaugural class of its technology commercialization program, called The Bridge.

 

"Cyabra was established at the right moment and answers a real need," said Rami Ben-Barak, a former deputy director of the Mossad, in a May interview with Calcalist. Mr. Ben-Barak, who had also served as the director general of the Ministry of Intelligence Services and the Ministry of Strategic Affairs, is a senior advisor to Cyabra.

 

"Fake news is a global phenomenon that negatively impacts the economic and the political domains," Mr. Ben-Barak explained. "In my eyes, it is currently one of the greatest dangers to democracy and to the Western world. We are becoming aware that many significant global events are being influenced by groups and interested parties that employ fake identities to create a false reality," he said.

 

Disinformation campaigns have been connected to several election campaigns in recent years, including the 2016 U.S. elections. In 2017, researchers from the University of Iowa found that some 100 million Facebook "likes" that appeared between 2015 and 2016 were created by spammers using around a million fake profiles. A more recent MIT study that looked at over 126,000 stories on Twitter between 2006 and 2016 found that fake news outperforms true stories across all parameters, including audience reach and the speed at which it spreads. Recent privacy scandals, such as the one involving Cambridge Analytica, have also served to illustrate just how vulnerable social networks are to outside influence, prompting investigations in the U.S. Congress and the European Parliament.

 

Cyabra's technology works by searching for several factors that are characteristic of fake profiles, according to Mr. Frangi. The first is the date the profile was set up, as fake ones are usually created shortly before the launch of a campaign. The second is profile history: a three-day-old profile with 700 friends raises a red flag. The third is whether the profile has the wider digital fingerprint characteristic of how real people use the internet.

 

"Anyone who is active online leaves traces that can be identified, be it a Google+ profile, a blog, or an email account," Mr. Frangi explained. "A real person who is active online will be active in spaces other than their Facebook profile."
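The red flags Mr. Frangi describes can be illustrated with a simple scoring sketch. This is not Cyabra's proprietary deep-learning model, which the article does not detail; the field names and thresholds below are assumptions chosen only to make the three heuristics concrete.

```python
from datetime import date

def fake_profile_score(created: date, today: date, friend_count: int,
                       external_accounts: int) -> int:
    """Return a 0-3 suspicion score, one point per red flag from the article.

    Thresholds are illustrative assumptions, not Cyabra's actual parameters.
    """
    score = 0
    age_days = (today - created).days
    # Flag 1: profile created shortly before a campaign window.
    if age_days < 30:
        score += 1
    # Flag 2: implausible friend growth, e.g. 700 friends on a 3-day-old profile.
    if age_days > 0 and friend_count / age_days > 100:
        score += 1
    # Flag 3: no wider digital fingerprint (blog, email, other accounts).
    if external_accounts == 0:
        score += 1
    return score

# The article's example: a three-day-old profile with 700 friends and no footprint.
print(fake_profile_score(date(2018, 6, 8), date(2018, 6, 11), 700, 0))  # prints 3
```

A production system would of course learn such signals from data rather than hard-code them, which is presumably where the company's deep-learning approach comes in.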

 

Cyabra can determine the scope and outline of a disinformation attack within a day or two, thanks to its AI engine. "20 analysts earning (top) salaries will find out the same information, but it will take six days and that's too late," Mr. Frangi said. Early detection is crucial for an effective response, he added, as false information that gains enough traction can tank a company's stock or put a candidate out of the running, even if it is ultimately proven false.

 

As an example, Mr. Frangi cites PepsiCo CEO Indra Nooyi. In a November 2016 interview, she referred to the election of U.S. President Donald Trump, saying many of her female, non-white, or LGBT employees were worried about the ramifications of his policies. Soon after, several websites claimed she had told Trump supporters to "take their business elsewhere," leading to calls for a boycott and sending the company's stock plummeting.

 

"The important thing is to change the narrative," Mr. Frangi explained. "If PepsiCo's CEO had understood in real time that she was under attack using fake news, she could've leveraged traditional media to counteract it. She could've gone on CNN, or the papers, and taken apart the campaign."