Why Sharing Your Data Online Is Much Like Sharing Your Toothbrush
With the internet a bastion of rampant misogyny, bigotry, racism, and fake news, Democrats and Republicans alike are pushing for more online censorship
Earlier this month, the Los Angeles district attorney warned travelers about the dangers of "juice jacking," particularly when using USB chargers placed in public locations. According to the DA, criminals can easily load malware into public USB charging facilities, or even into the charging cables themselves, at places like airports or malls. That malware can then infect phones and extract personal and private information. Plugging into a public port has been compared to finding a used toothbrush on the side of the road and using it, with no clue where it has been or what diseases it might give you.
While some have accused the DA of hyperbole in this case, it may be difficult to trust your internet searches on the subject. According to a recent report by the Wall Street Journal, state attorneys general from every U.S. state are currently probing whether Google finesses its search algorithm in ways that might produce biased results. Google has denied the allegation.
But that is the least of our problems with Google and other internet giants, including Facebook, Amazon, Apple, Alibaba, and Netflix. According to a recent poll by the nonpartisan Pew Research Center, Americans are increasingly concerned about how search engines and social media sites collect and use their information. Pew found that a large majority of Americans believe that their online, and even offline, activities are being tracked by Big Brother as well as by various commercial smaller brothers.
They are probably right. It is a longstanding dictum of the internet era that if something is free, like Google's services, Yahoo Mail or Facebook, then you are not the consumer; you are the product. While we as consumers generally know this, we do not seem to fully grasp it, as we nevertheless continue to use services that trade on our most personal information. Perhaps that is because these services have become essential to everyday functioning in today's society.
Given all of this, Democratic presidential hopeful Andrew Yang is taking aim at the business models of big internet companies. While less radical than those who want to tear down these monopolies by dismantling the companies themselves, Yang has proposed to tax the digital ads that are their chief revenue source.
Yang has also promised to appoint a Secretary of Technology to his future cabinet. The new department would oversee regulations on internet giants, such as allowing customers to opt out of data collection or, as with their European counterparts, giving them the ability to easily delete their data or transfer it to competing websites. Yang also wants his government to oversee the design of the various addictive devices and software, such as smartphones and messaging apps, that are the gateways for data collection.
Many of Yang's promises rest on the notion that your data is your property, and that you should be able to restrict access to it or exert ownership control over it, as you can with physical property. This is not a trivial position. Legally, the metes and bounds of data ownership are far from clear-cut, and many intellectual property regimes struggle to accommodate ownership of factual data.
It is not just Yang and the Democrats. Representatives from both parties are increasingly trying to reassess the existing regulations that govern how digital platforms host user-generated content.
Section 512 of the Digital Millennium Copyright Act provides platforms a safe harbor for hosting infringing content. Section 230 of the Communications Decency Act infamously provides those platforms safe harbor even when they host repugnant or false content, reading: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
According to the drafters of the original legislation, the intent of this immunity was to shield content hosting services from liability for failing to censor all of their hosted content if they attempted to expurgate some of that content.
Section 230 also provides that users and service providers will not be liable for removing, "in good faith," content they deem obscene, even if that content is speech protected by the Constitution.
In effect, Section 230 is a Good Samaritan law for Big Tech, sheltering companies from liability for failing to censor enough while attempting to censor some. And, like all Good Samaritan laws, which are intended to encourage bystanders to help without fear of repercussions for their mistakes, Section 230 incentivizes internet platforms to host all manner of free speech online by shielding them when they fail to remove enough damaging speech.
With the internet a bastion of rampant misogyny, bigotry, racism, and fake news, all available within incredibly politicized social media echo chambers, many now see the immunity provided by Section 230 as at least partially responsible for everything from repeated mass shootings to the election of President Donald Trump, disregarding the more basic and foundational concept of free speech. Mirroring the broad bipartisan support Section 230 enjoyed when it was enacted, there is now arguably misinformed bipartisan support for dismantling it.
The irony is that the people who want to tear down Section 230 are often the same people who want to weaken the big web companies in the hopes of promoting more competition. Seemingly unappreciated by the growing anti-Silicon-Valley wing of the Democratic Party is the fact that the already unattainable task of effectively moderating content from billions of users in dozens of languages will become even harder with Big Tech broken up. As it stands now, even the enormous resources of these giants seem insufficient for effective moderation.
Politicians should also be careful what they wish for. Given the social importance of platforms like Facebook, do we really want them to decide what is and what is not acceptable content? After all, more Americans would rather share their toothbrush than trust internet platforms.
Dov Greenbaum, JD PhD, is the director of the Zvi Meitar Institute for Legal Implications of Emerging Technologies and Professor at the Harry Radzyner Law School, both at the Interdisciplinary Center (IDC) Herzliya.