Unmasking the potential and risks of facial recognition technologies
While FRT is ostensibly as useful a tool in combating crime as facial masks are in stopping the spread of Covid-19, its inherent ethical issues and quickly changing science could muddy the waters for its future uses
Despite a 2019 Pew Research survey suggesting that more than half of U.S. adults trusted police use of FRT, Amazon, IBM, and Microsoft have all recently pledged to cut back on their commercial efforts in the field.
Amazon even called for a one-year moratorium on providing the technology to police forces, although it left open the possibility of continuing to sell it to federal agencies.
In addition to such business decisions, some U.S. states already ban the use of the technology in police body cameras, and the cities of Oakland, San Francisco, and, most recently, Boston have completely banned its use by both federal and local police.
Indicative of the mood in the political sector driving these changes, a recent California law seeking to regulate FRT has been criticized for not going far enough in reining in what some perceive as an “inherently authoritarian technology.”
A further example is a newly proposed federal bill, the Facial Recognition and Biometric Technology Moratorium Act of 2020, introduced by the whimsically matched senatorial duo of Markey and Merkley. The bill aims to temporarily ban the use of FRT by federal law enforcement.
But why is FRT suddenly so despised and feared? For one, while ostensibly a useful tool in combating crime and violence, there have long been concerns that FRT is especially biased against minorities. Now, with the growing backlash against aggressive policing tactics in general, some see FRT as just another example of police overreach.
As discussed in a previous column, there is a whole gamut of FRT available to the police, including, most recently, an unpublished research project that claims to extract information regarding criminality from facial analysis "with no racial bias" (harkening back to the early days of mugshots, which were originally intended to differentiate types of criminal behavior based on facial features). This is not the first such attempt in the not-so-distant past, and a number of academics have signed petitions to stop this type of research.
However, even if all FRT worked as well as promised and without bias, there are still concerns that the police themselves abuse the technology by overestimating its probative value in their police work. A recent example involved Detroit police officers arresting a black man, Robert Williams, on his lawn in front of his family for jewelry theft. Williams was held in police custody for more than a day, even though it emerged early on that the arrest was reportedly based solely on a pixelated image extracted from a video and run through an FRT database in search of a match. This reported abuse occurred even though police procedure itself disallowed arrests based on FRT alone, noting that an FRT match does not rise to the level of probable cause necessary to arrest someone.
FRT suffers from major ethical challenges that need to be overcome before it becomes part of the police’s standard operating procedure. The first is technical in nature. With much of the training data for the advanced artificial intelligence algorithms underlying FRT based on images of white males, the systems tend to produce false positives and false negatives when faced with people of color. This is especially true for Native Americans and black and brown women, and there is an abundance of embarrassing displays of the resulting AI bias.
Not only is this bad press for the technology, but it can result in challenges to its admissibility in court by tech-savvy lawyers or, worse, a failure to challenge its admissibility by overworked or less savvy public defense attorneys representing the poor and members of minority groups.
Another ethical challenge concerning FRT was on full display in the aforementioned Detroit jewelry heist. Police tend to ascribe too much weight to FRT matches, allowing the technology to substitute for real police work. Notably, similar complaints regarding the strain that technologies place on due process have been raised every time a new forensic technology is developed.
Additionally, the speed at which this technology has developed has, as in most areas of emerging technology, led to a paucity of technical standards, as well as a lack of rules and regulations outlining how, when, and why such technology can be implemented in both the public and private sectors, although there have been numerous efforts to catch up.
There is also a visceral distaste for FRT, as people fear that it converts decentralized, albeit pervasive, closed-circuit and smartphone cameras into a centralized, monstrous surveillance regime that is effectively always looking for you. Given the obscene number of cameras in society today, not only is it just a matter of time before the Loch Ness Monster is finally caught on camera, but there are also bound to be many success stories of FRT catching criminals. Some might argue, however, that these easy wins simply feed into a feedback loop that pushes this surveillance creep into ever more personal aspects of our lives.
All is not lost for FRT. The coronavirus (Covid-19) crisis may provide an alternative use for the technology, one with less political baggage. A number of companies are looking to utilize it to assess whether people are wearing their masks in public, as mandated in Israel and various other jurisdictions.
Although citizens may spew an inordinate number of reasons why they should not have to wear masks, the broad police powers granted to the states by the U.S. Constitution likely give them sufficient support for such mandates in the name of public health. U.S. Supreme Court rulings provide ample examples of this support, from Thurlow v. Massachusetts (1847) and Jacobson v. Massachusetts (1905) up to the recent case of South Bay United Pentecostal Church v. Newsom (May 29, 2020). And, if the law is not reason enough for you to wear a mask, perhaps you would do it to get an edit button on Twitter.
But masks are confusing to people. While fake news has been a problem throughout the pandemic, there is perhaps an even more sinister, albeit less malicious, type of news: muddled news. Muddled news results when our news cycle moves at the speed of science. While science seeks truth, it often slowly snakes its way through numerous less-than-optimal hypotheses before a reliable truth is discovered and finally reported to the public. Just consider the inconsistent health benefits associated with coffee.
But now, with all the research results coming at news consumers in real-time, many are struggling to keep up with all the new and contradictory information, resulting in muddled news.
Initially, for example, the general public was strongly advised to eschew masks, leaving them for frontline workers and the sick, with the claim that healthy people would gain nothing from using them. Now, we are hearing the opposite. What happened is that we experienced a real-time scientific paradigm shift.
As more information on the way the virus spreads became available, researchers and policymakers began shifting away from focusing on protecting the wearer, and from the notion of masks as personal protective equipment (PPE) as still used in the medical field, toward the perception that masks are needed to protect everyone else from the wearer.
These changes in science-based policy, particularly when they are rapid and inherently contradictory, can erode trust in policymakers and science, with long-term effects that will stay with us long after Covid-19 is gone.
The same might be argued regarding FRT. If we pivot too quickly and too far away from our prior positions, we may find it hard to go back to this promising technology when the science eventually advances.
Dov Greenbaum is a director at the Zvi Meitar Institute for Legal Implications of Emerging Technologies, at Israeli academic institute IDC Herzliya.