China, Russia, and the Galactic Empire All Have One Thing in Common: Internet Censorship
Rather than deposing dictatorships, the internet, through fake news on one hand and legal censorship on the other, now seems to empower them
Let us examine this claim, without any spoilers. Internet censorship is present in Star Wars from the first film released, Episode IV—A New Hope, in which whistleblower data must be smuggled out in the form of a corrupted hologram, to the most recent film, in which an important plot point hinges on a core technology that has been partially locked down with digital rights management tools to prevent long-outdated uses of what the government once considered seditious information.
Clearly, internet censorship does not exist just in film. It is a very real tool used by regimes both despotic and democratic to control information and punish infringers. Newsworthy uses of censorship range from the air-gapped intranet of the North Korean hermit kingdom, which is totally cut off from the rest of the world's internet, to the limits that countries such as China, Vietnam, Tunisia, Cuba, and Ethiopia place on access to websites that contradict their regimes' messaging.
China, for example, has just released new regulations through its Cyberspace Administration requiring online content companies to promote "positive" content that is in line with the state's policies. Iran, on the other hand, recently chose to limit its citizens' ability to share content with the rest of the world, mainly to prevent information regarding its recent bloody crackdown on protesters from getting out. India, considered the world's largest democracy, is in fact one of the worst offenders when it comes to internet censorship and has cut off internet access in various municipalities, on at least 90 recent occasions, as a seemingly punitive measure.
India is likely to soon have even more restrictions on internet access with its new proposed internet rules. These new regulations would allow the government to demand websites take down posts it deems hateful, libelous, or deceptive. While claiming to protect the privacy of citizens, the law would also ironically weaken the privacy protections provided by encryption in services like Facebook's instant messaging app WhatsApp.
Similarly, Russia recently passed a sovereign internet law that will allow the government to block internet access in parts of or even all of Russia during emergency situations. The law would also allow Russia to better censor or block internet content, with or without telling the citizenry and even without judicial consent. And to further cement its control over internet access, the Russian government recently passed another law that makes it mandatory to pre-install Russian apps, from a government-issued list, on all computers, tablets, and smartphones sold in the country.
To put the final nail in the country's free internet's coffin, Russia has just successfully tested a countrywide intranet. Nicknamed Runet, it would serve as a splinternet—a network wholly controlled by the government, limiting how Russians interact online with the outside world.
One of the most interesting developments in regulating citizens’ internet access is a law recently passed in Singapore that is designed to control and restrain the scourge of fake news. The law has been invoked in only a handful of cases, but some have argued that any government limitation on information is censorship and that readers of social media, not their governments, should be able to make their own determinations of what is and what is not fake news.
Put in a difficult position, Facebook, in response to requests from Singapore's government to alter online posts it deems as "fake news," decided to add a note to relevant posts stating it was "legally required to tell you that the Singapore government says this post has false information."
However, even those opposing government censorship might want to consider the unfortunate results of two prominent fake news cases. The first, a story that has been bouncing around Facebook for a couple of years, relates to a purported human trafficking ring that employs nondescript but incredibly common white vans; even the mayor of Baltimore fell for it, mentioning it as a serious concern in a recent interview. Attempting to fight such trends, Facebook has employed third-party fact-checkers to guide users regarding whether or not they can trust certain viral stories.
The second, more egregious case relates to a similarly pervasive WhatsApp rumor in India suggesting that strangers may be on the hunt to kidnap children. This rumor has resulted in the beating of around 150 men and women, at least nine of whom have subsequently died, who were seemingly guilty of nothing more than being strangers.
To its credit, WhatsApp has tried to limit the effect of viral messages like these. While the end-to-end encryption of each message makes it nearly impossible for WhatsApp to know the content of our sometimes malicious messages, the service does have metadata that could help, including how many times a message has been forwarded. WhatsApp now intends to make use of this information to let users in some jurisdictions know when a particular message has been forwarded many times; savvy users might then think twice about forwarding something that seems too viral to be accurate.
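The idea behind this metadata-based approach can be sketched in a few lines of code. The function below is a hypothetical illustration, not WhatsApp's actual implementation: the label names and the threshold value are assumptions, and the point is only that a service can warn users based solely on a forward counter, without ever decrypting the message itself.

```python
from typing import Optional

# Assumed cutoff for labeling a message "highly forwarded"; the real
# threshold used by any messaging service is not public.
FORWARD_THRESHOLD = 5

def forward_label(forward_count: int) -> Optional[str]:
    """Return a warning label based only on forwarding metadata.

    The message content is never inspected; the decision uses a single
    integer the server can track even for end-to-end encrypted messages.
    """
    if forward_count >= FORWARD_THRESHOLD:
        return "Forwarded many times"
    if forward_count > 0:
        return "Forwarded"
    return None

# Example: a message passed along seven times gets the strong warning,
# a once-forwarded message gets the mild one, an original message none.
print(forward_label(7))   # Forwarded many times
print(forward_label(1))   # Forwarded
print(forward_label(0))   # None
```

Because the counter travels with the message rather than with its content, this kind of check is compatible with end-to-end encryption, which is precisely why it is one of the few levers available to the platform.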
The fact that we tend to believe these stories can be explained by established brain research. Our brains are cognitive misers, constantly looking for shortcuts when assessing new information. These shortcuts are, however, not always reliable. In fact, research has shown that implausible statements initially rated as untrustworthy are perceived to be more reliable upon a second encounter. As Nobel Prize laureate Daniel Kahneman put it, "intuition is nothing more and nothing less than recognition." So the more viral a Facebook or WhatsApp thread is, the more likely we are to see it more than once, increasing the chance that we will eventually believe it.
If our brains are indeed hardwired to believe fake news, then we ought to find other ways to deal with the problem. Unfortunately, many of the proposed and heretofore unproven technological solutions raise serious concerns in and of themselves, including the potential chilling of free speech and the possible use of these same tools to surveil people.
Also, fake news may not necessarily always be so dangerous and insidious. For example, earlier this year, MIT's open agriculture food computer project, which aimed to optimize growing conditions for food, turned out to have been based on fake data, but no one died or was assaulted as a result.
Whatever form fake news takes in the new year, Snopes, the self-described definitive internet reference source for urban legends, folklore, myths, rumors, and misinformation, is already looking to increase its funding to deal with what it expects to be an avalanche of misinformation in 2020.
But while social media helped users employ the internet to unite much of the Middle East in the Arab Spring and other efforts to overthrow despots, the same social media is now used to divide us through the rampant spread of fake news and fake facts, undermining trust in each other and our institutions. Unfortunately, ten years later, rather than deposing dictatorships, the internet now seems to empower them, as well as bigots and racists of all stripes.
Dov Greenbaum, JD-PhD, is the director of the Zvi Meitar Institute for Legal Implications of Emerging Technologies at the Radzyner Law School, at the Interdisciplinary Center in Herzliya.