Can Facial Recognition Technologies, Privacy, and the Freedom of Expression Co-Exist?
Tech policy experts Limor Shmerling Magazanik and Noam Rosen outline possible measures to reduce privacy infringement caused by law enforcement agencies’ use of facial recognition software
Facial recognition technologies can tell whether two images show the same person. They work by generating a unique profile of a person’s facial biometrics and matching it against the biometric data extracted from already tagged images. The technology is effective even when only part of the face is exposed, and sometimes even when a person’s back is turned to the camera. It works on both still photographs and video footage, including when the person is in motion. When the system compares two facial profiles, it generates a numerical value indicating the probability that the two faces belong to the same person.
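As a rough illustration of that final matching step: real systems use deep neural networks to reduce each face image to an "embedding" vector, then score the similarity between two such vectors. The sketch below assumes two pre-computed embeddings and a deployer-chosen threshold; all names and numbers are hypothetical, not drawn from any particular product.

```python
import math

def cosine_similarity(a, b):
    """Score in [-1, 1]; higher means the two face embeddings are more alike."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical operating point: the deployer tunes this threshold to trade
# false positives against false negatives.
MATCH_THRESHOLD = 0.8

def is_same_person(embedding_1, embedding_2):
    return cosine_similarity(embedding_1, embedding_2) >= MATCH_THRESHOLD

# Two embeddings of the same face should score near 1.0 ...
print(is_same_person([0.9, 0.1, 0.4], [0.85, 0.15, 0.38]))  # True
# ... while embeddings of different faces score much lower.
print(is_same_person([0.9, 0.1, 0.4], [-0.2, 0.95, 0.1]))   # False
```

The key point for the policy debate is that the output is a probability-like score, not a certainty: everything downstream depends on where the threshold is set.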
It may not come as a surprise that non-democratic nations are investing heavily in these technologies, but they are not alone: the website of Chinese tech company Huawei Technologies Co. Ltd. lists quite a few European cities as clients, outlining their customer success stories in implementing facial recognition technologies as part of their smart city initiatives.
Most democratic states protect the right to demonstrate by law or constitution, based on the understanding that demonstrations allow people who have no access to decision-makers to voice their opinions and influence public policy and the public agenda. In Israel, the freedom to demonstrate is an integral part of the freedom of speech, protected under Basic Law: Human Dignity and Liberty. The Israeli courts consider freedom of speech a supreme right, since it constitutes a precondition for exercising other rights.
Nonetheless, it is not unlimited. The right to demonstrate is protected only as long as people exercise it in peaceful ways and according to the instructions of law enforcement. When this is not the case, the police are authorized to take action to ensure the public order is maintained.
The debate on technology’s impact on democracy and human rights has intensified recently. The use of facial recognition technology can be argued to discourage people from legitimately exercising their freedom of speech, in what is commonly referred to as a “chilling effect.” This argument is based on the notion that people feel at ease taking to the streets and expressing unpopular opinions anonymously, but will likely refrain from doing so if they know they can be identified, even if no direct sanctions are involved.
Opponents of facial recognition technologies argue that regulation is needed, that the technology is being used disproportionately, and that other means exist that serve the same ends with a lesser impact on privacy. They also stress that people have not consented to being photographed and have no control over the biometric data the cameras collect. Worse still, research shows that some software programs base their matches on biased data, which may lead to false positives, particularly for people of color and women.
There are also valid concerns that the authorities might retain data and compile blacklists in a manner that infringes on human rights. Even more disturbing is the possibility of combining facial recognition with artificial intelligence to mine medical and mental health data from a person’s facial features. While the use of biometric identification on millions of people may help track down a few suspects, thus ensuring the safety of many, it might also turn us into a society under constant surveillance, raising grave concerns for democracy and the right to privacy. It is due to these very concerns that California decided to prohibit police from using facial recognition technology in officers’ body cameras, at least for the next three years.
While Israeli law stipulates that taking a person’s photo within their private domain constitutes a privacy infringement, one can argue that using facial recognition technologies in the public domain to identify a person who has intentionally hidden part of their face is equivalent to taking a person’s photo in their home without their consent.
Under particular circumstances, taking a person’s picture in public may be deemed to have occurred in the private domain. In 2006, for example, Israel’s Supreme Court ruled in favor of a Haredi man who sued the Israeli newspaper Ha’aretz and its photographer Alex Levac after a photo of him was published by the paper despite the photographer’s promise that it would not be. Only informed consent, given in advance, can legally prevent such a practice from constituting a privacy infringement, a condition that is very difficult to fulfill when photographing in the public sphere.
Paradoxically, however, even just requiring authorities to inform the public about the existence of surveillance cameras, a measure intended to mitigate the privacy infringement, might itself deter people from attending a demonstration.
Nonetheless, privacy protection laws often exempt law enforcement authorities from liability for privacy infringement, provided the infringement was reasonable and necessary for them to fulfill their duty. The U.K. High Court recently ruled that the use of facial recognition cameras for maintaining public order and detecting criminals is lawful under both the European Convention on Human Rights and the General Data Protection Regulation (GDPR). It is important to note that the British police made sure the cameras complied with the GDPR before deploying them. Among other measures, the British police conducted a privacy impact assessment and used the findings to design data collection, processing, and retention policies that would comply with the law.
To summarize, legislators ought to ensure that facial recognition technology does not infringe on the right to privacy and the freedom of expression in the quest to reap the public safety benefits the technology has to offer.
Law enforcement authorities, for their part, need to spell out the purpose for which they implement the technology, so that facial recognition is used only when not using it would put the public in real danger. They must commit to collecting only data that is accurate, relevant, and essential to the stated purpose, and not use the data for any other purpose. Whenever a facial recognition program is run to identify specific persons and the comparison returns a negative result, the police must purge the biometric data related to that mismatch. Whenever the system detects a match, a named human examiner must inspect it before further action is taken, to minimize the chance of a false positive. The law enforcement body must also have rigorous information security controls in place and comply with all legal and regulatory requirements and restrictions that apply to the use of surveillance cameras in the public sphere.
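The retention and review safeguards described above can be summarized as a simple decision rule: purge biometric data on a negative result, and queue a positive result for a named human examiner rather than acting on it automatically. The sketch below is a hypothetical illustration of that rule, not any agency's actual system; all names are assumptions.

```python
def handle_result(score, biometric_record, threshold=0.8, review_queue=None):
    """Apply the proposed safeguards to one facial-recognition comparison.

    score            -- the match probability produced by the system
    biometric_record -- dict holding the biometric data for this comparison
    threshold        -- hypothetical operating point set by the deployer
    review_queue     -- list of matches awaiting a named human examiner
    """
    if review_queue is None:
        review_queue = []
    if score < threshold:
        # Negative result: purge the biometric data immediately.
        biometric_record.clear()
        return review_queue
    # Positive result: never acted on directly; a named human examiner
    # must confirm it before any further step is taken.
    review_queue.append({
        "record": biometric_record,
        "score": score,
        "examiner": None,      # to be assigned by name
        "confirmed": False,    # stays False until human review
    })
    return review_queue
```

The point of structuring it this way is that automation ends at the match score: data retention and enforcement action are both gated on explicit, auditable human decisions.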
Limor Shmerling Magazanik is the managing director of Tel Aviv-based policy think tank the Israeli Tech Policy Institute and a senior research fellow at the Future of Privacy Forum.
Noam Rosen is a legal advocate and a policy counselor for the Israel Tech Policy Institute.