Jeff Bezos, Meghan Markle, and You Have This in Common: Privacy Concerns

Passwords are both easily cracked and easily forgotten, but available alternatives, such as biometric facial recognition technologies, raise prominent concerns of their own

Dov Greenbaum | 11:13, 31.01.20
It is time we face it: passwords suck. Even without all the different and sometimes inconsistent rules about password formation (with special characters, without special characters, with uppercase letters, with numbers, at least eight characters, or not), they are easily forgotten and often a pain to change. This is especially the case when your internet browser logs you in automatically for extended periods of time and then, suddenly, you are forced to log in on a new browser and recall all the forgotten but pertinent login information.

With the need to log in to often hundreds of sites, we tend to reuse and recycle our passwords across platforms. But, it turns out, this is a really bad idea. Sites and their associated user names and passwords get hacked all the time. John Chambers, the former CEO of Cisco Systems, famously told an audience at the World Economic Forum that there are only two types of websites: those that have been hacked, and those that do not know that they have been hacked.

Duchess Meghan Markle (right) and her husband Prince Harry. Photo: EPA

Once hacked, email and password pairs are often collected and made easily accessible on the darknet and, sometimes, even on the regular web. This means that if you are reusing an email-password combination across multiple services, chances are one of those sites has been hacked and that combination is available for the taking, as are all the other accounts where you lazily reused the same password.

If you think your password combinations are safe, consider this: earlier this month, the Federal Bureau of Investigation (FBI) seized one of the largest and most easily accessible of these sites, Weleakinfo, which claimed to have billions of email-password pairs for sale, ostensibly compiled from over 100,000 separate data breaches.

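If you want to check whether one of your own passwords has already surfaced in such a collection, the free Have I Been Pwned service exposes its Pwned Passwords corpus through a k-anonymity range API: only the first five characters of the password's SHA-1 hash are ever sent, so the password itself never leaves your machine. The following is a minimal sketch in Python; the endpoint is the service's documented one, while the function name and example password are purely illustrative:

import hashlib
import urllib.request

def breach_count(password: str) -> int:
    # Hash locally; the Pwned Passwords range API is keyed on SHA-1.
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    # Only the 5-character prefix is transmitted; the response lists
    # every breached hash sharing that prefix as "SUFFIX:COUNT" lines.
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        lines = resp.read().decode("utf-8").splitlines()
    for line in lines:
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0  # not found in any known breach

print(breach_count("password123"))  # a perennial favorite; expect a big number

Any result other than zero means that password is already circulating in exactly the kind of collection Weleakinfo was selling, and it should be retired everywhere it was reused.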

Because of these and other cybersecurity concerns, many organizations are looking for alternative security protocols, ones that are more robust, easier to use, and harder to lose, to replace the ubiquitous but inept password. One of those alternatives is biometrics, such as facial recognition technologies.

Facial recognition technologies show substantial promise in many areas of society, from catching criminals—earlier this week, the U.K. announced that it will be introducing limited real-time facial recognition cameras—to replacing those ever-problematic passwords. But facial recognition technology raises its own set of issues, from bias, to misuse, to rampant, unwarranted, constant surveillance.

The European Commission only recently began looking into how facial recognition technology can be employed without impinging on European citizens' fundamental rights. There have even been recent reports that the European Union is considering a moratorium on the use of facial recognition technologies in public spaces until the technology can be better regulated.

Reports also suggest that some companies agree with the idea of a ban, among them Alphabet Inc., Google's parent company and a longtime advocate against the use of the technology, while others, like Microsoft, support a more finely tuned approach that regulates, rather than prohibits, this increasingly popular technology.

A handful of jurisdictions, including San Francisco and Berkeley, California, have already enacted broad bans on the use of facial recognition technology by government agencies within their municipalities. California also recently passed a narrower ban, placing a three-year moratorium on the use of facial recognition technology by law enforcement. Notably, this ban effectively killed off San Diego's mostly failed and initially secretive attempt at using facial recognition to solve crimes.

Other states have gone even further in regulating facial recognition technology. Illinois, for example, has an arguably overzealous anti-biometric law that has ensnared a number of large internet companies. Just last week, the U.S. Supreme Court declined to hear an appeal by Facebook in a multi-billion-dollar class-action lawsuit brought under this law.

But no matter your position on moratoriums, bans, or possibly excessive regulatory oversight, there is no doubt that the technology is at risk of being abused.

Last week, the New York Times reported that Clearview AI Inc., a relatively small and secretive company funded by libertarian Silicon Valley billionaire Peter Thiel, has scraped billions of images off of millions of websites, including videos and pictures posted on YouTube and Facebook. Clearview applied its reportedly unvetted facial recognition software, Smartcheckr, to scan for faces within the images and develop a database composed of billions of faces.

Clearview is not necessarily doing this for malicious purposes, though. The company has sold access to its database to, purportedly, over 600 U.S. police departments, which have used the information to solve crimes ranging from shoplifting to murder, some resolved in under 20 minutes thanks to Clearview's database. Clearview claims that it is able to match people in photos submitted by police to people in its large database up to 75% of the time. Not everyone in law enforcement likes the technology. New Jersey, for example, has barred its police from using it. However, in some cases, even when a department refrains from using the technology, individual police officers reportedly still use it.

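Clearview has not published how Smartcheckr works, but modern face-matching systems generally reduce each face to a numeric embedding vector and then search a gallery of such vectors for the most similar one. The sketch below, in Python, illustrates only that generic idea; the function name, the 512-dimensional embeddings, and the 0.6 similarity threshold are illustrative assumptions, not Clearview's actual design:

import numpy as np

def best_match(probe, gallery, threshold=0.6):
    # Normalize so that a dot product equals cosine similarity.
    probe = probe / np.linalg.norm(probe)
    gallery = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = gallery @ probe
    best = int(np.argmax(sims))
    # Report a hit only if the closest face clears the threshold.
    return (best, float(sims[best])) if sims[best] >= threshold else None

# Toy gallery of 1,000 hypothetical 512-dimensional face embeddings.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(1000, 512))
probe = gallery[42] + rng.normal(scale=0.05, size=512)  # a noisy shot of face 42
print(best_match(probe, gallery))  # expect index 42 with similarity near 1.0

A figure like Clearview's claimed 75% is, in effect, a statement about how often the correct person clears a threshold like this one; where that threshold is set also determines how often an innocent lookalike does.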

However, Clearview may also have other uses for the data it collects. Clearview employees may be reviewing and monitoring the images sent by the police, which could seemingly constitute an abuse of privacy and protocol. Clearview might also be adding the police-submitted photos to its database. Although the images are ostensibly of yet-to-be-charged or convicted individuals, who are innocent until proven guilty, Clearview likely sees them, potentially unjustly, as a valuable database of persons of interest. Clearview is now also facing a class-action lawsuit in Illinois under the aforementioned anti-biometric law.

The police are not the only ones employing facial recognition. The U.S. Department of Homeland Security recently released its assessment of broadly employing facial recognition for all travelers using the popular expedited Global Entry system at U.S. borders. The technology is already employed at over 15 airports.

To deal with the surge in the use of facial recognition, some experts are already advising users to change their Facebook privacy settings to keep Clearview, and companies like it, from scraping any more of their private pictures.


In light of all this, and perceiving the increasing creep of intrusive facial recognition technology, some citizens are taking their concerns into their own hands. While many jurisdictions have anti-mask laws that do not allow people to walk around with their faces covered to avoid being recognized by cameras, there are some newer technologies that seek to foil, confuse, and otherwise fool facial recognition cameras with seemingly unremarkable devices. For example, you can now buy inconspicuous hi-tech glasses that prevent facial recognition cameras from recording your face.

While it is not always possible to inconspicuously protect your face from privacy intrusions, many argue that the alternative of preemptively imposing blanket bans on facial recognition technology will stymie potentially useful innovation. They also claim it will, in the end, not protect our privacy, instead leading to a protracted game of whack-a-mole in which surveillance professionals switch to other, perhaps even more intrusive, methods of identification every time the technology du jour gets banned.

This expansive privacy threat requires privacy laws that are technology-agnostic. Last week, several technology billionaires at the World Economic Forum in Davos advocated for something of the sort in the form of new privacy laws that protect our personal data and prevent it from being misused. Concurrently, the U.S. National Institute of Standards and Technology released its own privacy framework to promote ethical engineering practices, particularly concerning personal data, regardless of the technology or nature of the enterprise.

Our privacy is not something to be taken lightly. To appreciate the extent to which the lack of privacy can push someone, look no further than the recent Megxit, in which a prince and a duchess gave up their fairytale, taxpayer-supported lifestyle as a result of an expansive and relentless invasion of their privacy by the British tabloids.

Finally, if you think that your own efforts at cybersecurity will protect you without government oversight, just consider the recently reported allegations that the Saudi Crown Prince hacked one of the most technologically savvy individuals in the world, Amazon CEO Jeff Bezos. I wonder what his password was.

Dov Greenbaum, JD-PhD, is the director of the Zvi Meitar Institute for Legal Implications of Emerging Technologies at the Radzyner Law School, at the Interdisciplinary Center in Herzliya.