The fight against facial recognition, and against the systemic racism reflected in its algorithms and training data sets, misses the context for why the technology will likely continue to be developed. The bigger issue isn't facial recognition but the "fake it till you make it" culture of software development that ships tools that aren't ready, to markets that don't know how to adequately evaluate their acquisitions.
Employing racially biased facial recognition software for policing is unjust and wrong. Using facial recognition to tell you who is at the door, drawn from your own curated list of identified friends and family, is much more morally ambiguous. That Apple announced facial recognition as part of its June HomeKit announcement implies that it does not put that offering in the same camp as the Amazon, Microsoft, and IBM technology recently put on hold because of the clear racial bias built into their training sets and algorithms (see "The two-year fight to stop Amazon from selling face recognition to the police," MIT Technology Review). That means that, at least for now, the refinement of facial recognition will continue for other uses that don't so clearly impinge on civil liberties, such as finding lost children, combating human trafficking, and securing industrial buildings.
Facial recognition is probably not going anywhere. Those currently fighting against it argue from a single use case and a limited data set. They also often bring a confirmation bias to their argument, exploiting facts that reinforce it and ignoring those that don't. I do not believe they are wrong, but their myopic, U.S.-centric argument against local policing and its control proxies doesn't account for the global scenarios associated with facial recognition. Thinking that curtailing the use of facial recognition in policing today future-proofs against facial recognition in other contexts encapsulates this moment in a historical bubble that will likely pop sooner rather than later.
The politics of change, future threats, and the global landscape all suggest that, like most other technology caught up in the balance between safety and security, facial recognition will remain in the arsenal of governments and police because it serves a purpose, and because the U.S. cannot afford to be the only country that gives it up. As with most technology, the loss of capability in one area reduces readiness and agility in others.
The base case for facial recognition
The issue of facial recognition for policing and other policing proxy applications, such as apartment building or mall security, requires reconsideration. I say reconsideration rather than wholesale disengagement because the technology does have positive uses, and abandoning it might leave the world less safe.
At the core of the current issue is facial recognition software that misidentifies people of color, in particular black men and women, at significantly higher rates than white people. How software known to have this bias continued to be sold after the bias was discovered remains an issue for deeper investigation. It should not have been sold beyond pilots, and failed pilots should have resulted in feedback, not deployment.
That similar software has been offered, for instance, to monitor traffic at an apartment building is appalling.
All the negative use cases reflect unprofessional trials, deceptive sales, and immoral deployments. Subpar practices result in a market of miracles built atop a dubious foundation. Anyone in the software industry knows you can make almost any beta software meet a customer's expectations during a demo as long as the person running the demo follows the script. Bad software should never be sold, and facial recognition is not the only bad software. Zoom's recent encryption issues and its Facebook data sharing, along with software breaches at major banks, insurance companies, and credit bureaus, attest to software problems that stem not from malicious use or intentional sabotage but from complexity, poor processes, myopic program management, and faulty testing.
And even if the firms guilty of these abdications of responsibility get out of the market, they are far from the entirety of the market. Where a need exists, businesses will form to serve it.
What’s in a name?
In many of the discussions over the last few weeks, facial recognition software, cast as the culprit, was quickly decoupled from the broader class of technologies normally attributed to artificial intelligence. In the 1990s, people used to say that "if it works, it isn't AI." That probably still applies, but with the added qualifier that if it taints AI, it is out as well.
Markets for facial recognition exist beyond law enforcement
Large firms can afford to take the moral high ground. Firms like SenseTime, FaceFirst, TrueFace, Face++, Kairos, and Cognitec, for which facial recognition is the primary business, will still need to find buyers for their products. The wide range of use cases, as well as the varied approaches, suggests that facial recognition accuracy will continue to improve even if uneven data sets retain the underlying bias in general applications. For industry, switching the context internally changes the equation. Rather than seeking the unknown, industry may be more interested in confirming the known: verifying that they know a person rather than attempting to ascertain who a person is.
A facial recognition application might alert security anytime someone who is not known enters a facility. That refocuses the question. In an industrial context, people may even opt in after holding a session to confirm that the facial recognition system identifies them correctly to some threshold, which, for an inclusive application, is good enough because false positives are easily remedied in this context. If, however, testing reveals failures and bias in recognition, the software should be avoided. Buyers and vendors need to own the responsibility and accountability for deploying only software that meets needs with high quality.
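To make the distinction between identifying the unknown and confirming the known concrete, here is a minimal sketch in Python of the verification pattern described above: compare an incoming face embedding against a small gallery of opted-in, enrolled people and alert only when nothing clears a similarity threshold. The names, random vectors, and the 0.6 threshold are illustrative assumptions; a real deployment would use a vetted embedding model and thresholds validated against the site's own population.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def check_entry(probe: np.ndarray, enrolled: dict, threshold: float = 0.6):
    """Return the enrolled person's name if the probe embedding matches someone
    who opted in; return None (raise a security alert) if no one clears the
    threshold. The 0.6 threshold is illustrative, not a recommendation."""
    best_name, best_score = None, -1.0
    for name, reference in enrolled.items():
        score = cosine_similarity(probe, reference)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

# Toy usage: random vectors stand in for embeddings from a real face model.
rng = np.random.default_rng(42)
gallery = {"enrolled_employee_a": rng.normal(size=128),
           "enrolled_employee_b": rng.normal(size=128)}
unknown_visitor = rng.normal(size=128)
if check_entry(unknown_visitor, gallery) is None:
    print("No enrolled match above threshold: alert security.")
```

Because the application only confirms people who asked to be recognized, a false rejection costs a badge check at the door rather than an arrest, which is why the moral calculus differs from the policing case.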
The bottom line is that for large software firms, facial recognition is not a core business, and dropping development, or pruning the customer list, doesn't have much impact on their businesses. Dedicated facial recognition firms will likely continue their work in contexts outside of U.S. law enforcement (including law enforcement applications in other countries). Facial recognition technology serves too many use cases with potential value to disappear.
The politics of change: threats foreign and domestic
And context changes. Today we face an issue of domestic policing and systemic racism. After 9/11, the threat was foreign terrorism. Technology, including facial recognition, that could identify known terrorists moving through the air travel system was not so frowned upon.
While backing away from flawed facial recognition with temporary moratoriums proves politically expedient, it is also a waiting game. Should the context shift, the algorithms developed by these firms, redeployed in the meantime to solve other problems, could easily be retrained on new data sets and again deployed for domestic law enforcement. Bans can be overturned. What is acceptable or not can be shifted by the politics of the moment. We see this in presidential executive orders that place politics over consistency and predictability, orders often used to undo the contexts set by a previous administration and establish new ones aligned with the current one.
Facial recognition technology is neither right nor wrong at its core. Every deployment must balance threat against safety. If, under some future threat, a trusted supplier emerges with a facial recognition solution to that threat, the body politic will likely reconsider its opposition to the technology because the immediate threat will feel more existential than other concerns.
Facing up to a global context
Agreements to curtail the use of facial recognition in policing mean little to foreign powers. Even if the U.S. were to step up and disavow facial recognition across all military and non-military applications, there is no reason to believe that the Russian, Chinese, or any other government would follow.
The promise of parity will likely drive militaries to continue facial recognition development in secret programs that, by design, don't make national headlines. A government always desires better weapons than its enemies, and it will seek parity where it cannot dominate (as an example, see "UK spies will need artificial intelligence," BBC, 27 April 2020). Without a global agreement to stop the use of facial recognition software, the governments of the U.S., Europe, and other nations will continue developing it in order to remain competent with the technology, if not to deploy it, then to confound it. And we all know that international agreements also exist at the whim of current administrations.
Facial recognition’s implications for other AI applications
Facial recognition essentially leverages pattern recognition algorithms to do its work. It combines imaging algorithms that "see" an image with large training sets that teach a machine learning algorithm, using either supervised learning on human-labeled data or more automated approaches known as unsupervised learning. There is nothing different about the algorithms that recognize human faces, save a few tweaks of parameters, from those used to identify pictures of cats.
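A rough sketch of that point, assuming scikit-learn as the toolkit: the training function below neither knows nor cares whether its feature vectors describe faces or cats; only the data and the meaning of the labels change. The random arrays stand in for real image features and are purely illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def train_pattern_recognizer(features, labels):
    """Generic supervised pattern recognition: the identical pipeline serves
    whether the labels name people, cat breeds, or anything else."""
    X_train, X_test, y_train, y_test = train_test_split(
        features, labels, test_size=0.25, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    return model, model.score(X_test, y_test)

rng = np.random.default_rng(0)
# Stand-ins for extracted image features; only what the labels refer to differs.
face_features, face_labels = rng.normal(size=(200, 64)), rng.integers(0, 5, size=200)
cat_features, cat_labels = rng.normal(size=(200, 64)), rng.integers(0, 5, size=200)

face_model, face_accuracy = train_pattern_recognizer(face_features, face_labels)
cat_model, cat_accuracy = train_pattern_recognizer(cat_features, cat_labels)
```

The consequence is that whatever biases live in the data flow straight through the pipeline, whether the subject is a cat breed or a person's face.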
Pattern recognition also supports other functions prone to discrimination, such as loan and insurance risk analysis and job application screening. Facial recognition may be the most visible technology reflecting a biased data set today, but it is not the only pattern recognition derivative with flawed data sets and inadequate testing and quality procedures.
While the algorithms may contain parameters that contribute to racial bias, the data sets are the more likely culprit. In the race to deliver a customer solution against a competitor, or to deploy software that required a large investment, shortcuts get taken, and results suffer.
Beyond the failures of facial recognition technology in specific applications, larger issues of personal privacy must be considered. How, for instance, does one opt in or out of a system that uses biometrics for identification? Europe's GDPR requires explicit opt-in, while the California Consumer Privacy Act (CCPA) requires strong notifications and policies, but not an opt-in from the consumer. Within the CCPA framework, individuals could ask, "What data was used to suggest that this is me?"
And that raises the issue of facial recognition and transparency.
Facial recognition and transparency
Individuals may be able to sit in front of a camera and confront their facial recognition accuser. The system would be able to show them the points of identification mapped onto their face and the matches in the database that most closely fit their facial pattern. They may even be able to discover what specific data in a training set was used to build the model of them. They will not, however, likely be able to inquire why the machine learning algorithm made a mistake: what data it used, discounted, or misinterpreted.
The human brain can't do that either. A person would just say, "Well, I thought the person I saw looked like you." They cannot deconstruct their organic facial recognition pathways or the data stored in their brain. A 2017 paper by Thomas D. Albright, "Why eyewitnesses fail," explores the problems with, and caused by, faulty human eyewitness identification (read the article here: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5544328/).
No one will ever develop a facial recognition system that functions at one hundred percent accuracy under all circumstances. This does not mean that we should abandon the technology. It does mean that where the technology can provide value, humans need to take responsibility for its use and be held accountable for the decisions they make when they fail to question the efficacy of the information the system provides and that failure harms others.
Conclusion: Humans need to take responsibility
I love software. I am not afraid of any software or its functions. Software has no inherent morals. Unexecuted software is just inert bits on a disk. Software performs its analysis, runs its gaming simulation, or offers productivity support because a human designed it to function in a certain way. Software has no choice but to function according to its instructions.
While pattern recognition can employ unsupervised learning as one method of training, those who approve or develop the training sets should know how to test what good looks like. Just because an algorithm identifies patterns within its data set, or in an adjacent data set, does not mean it will work equally well for all future data sets. Humans ultimately make the call about quality. Applications fail or succeed based on what they are asked to do and what data they have available to draw upon. Humans set the conditions of the test and evaluate the results.
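One concrete form of testing what good looks like is disaggregating error rates by demographic group before sign-off. The sketch below, with made-up evaluation records and an illustrative release gate, shows the shape of such a check; the group names and the one-percentage-point limit are assumptions, not standards.

```python
from collections import defaultdict

# Made-up evaluation records: (group, ground_truth_is_match, system_said_match)
results = [
    ("group_a", False, False), ("group_a", False, True), ("group_a", True, True),
    ("group_b", False, True), ("group_b", False, True), ("group_b", True, True),
]

def false_match_rate_by_group(records):
    """False match rate per group: non-matches the system wrongly reported as matches."""
    counts = defaultdict(lambda: {"false_matches": 0, "non_matches": 0})
    for group, is_match, predicted_match in records:
        if not is_match:
            counts[group]["non_matches"] += 1
            if predicted_match:
                counts[group]["false_matches"] += 1
    return {group: c["false_matches"] / c["non_matches"]
            for group, c in counts.items() if c["non_matches"]}

rates = false_match_rate_by_group(results)
gap = max(rates.values()) - min(rates.values())
# Illustrative release gate: block deployment if group error rates diverge too far.
deployable = gap <= 0.01
print(rates, "deployable:", deployable)
```

A gate like this does not make the underlying data less biased; it simply forces a human to see the disparity and decide, accountably, whether to ship.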
Those who build software, however, are moral animals. That means the application of software is always the issue, not the software itself. And yes, this argument echoes those applied to guns and alcohol. Software doesn't discriminate against people; people discriminate against people. And like guns and alcohol, the context matters. If police use flawed facial recognition as an element of their justification to impose more restrictions on one group than another, that use of software is no different than a hunter arguing that they need an assault rifle to kill a deer. Both reflect a disregard for scale and constraint, and a selective use of facts to make an argument. Ignorance is also not an adequate defense. Saying that one did not know that a piece of software produces biased results is no better an argument than the gunman claiming not to understand that firing a bullet into the sky might kill someone on the ground.
In the end, the argument against facial recognition falls into two categories of intent. The first is the intent to deceive buyers that the technology is ready and works with accuracy for the application in question. For the most part, that form of intent reflects poorly on the software industry where the idea of “fake it till you make it” often prevails with little research or acknowledgment of the risk associated with shipping software that doesn’t work (this same form of intent applies to Boeing’s moral failings with the 737 Max software).
The second form of intent is one where technology is used knowing that its outcomes are evil. That is why there are degrees of murder. Killing someone "in cold blood," in the first degree, reflects a premeditated intent to kill, often viciously, while accidentally killing someone by firing a gun into the air sits elsewhere on the spectrum. Both are crimes.
It is highly unlikely that police departments willingly adopted, or software companies willingly sold, facial recognition software with the intent of misidentifying black Americans. That it was known to do so, by police or by a software vendor, creates culpability. Having knowledge and doing nothing exacerbates harmful intent.
For victims, the form intent takes does not matter. They still suffered an injustice that could have been prevented by elevating the intent of buyers and sellers to protect the security and dignity of all involved, and to apply only technology that meets that intent.
Like all moral concerns, facial recognition raises issues far beyond the immediacy of the first instance. Those seeking its long-term curtailment should refrain from arguing from single cases to make a broader argument. Humanity is on a road toward symbiosis with pattern recognition software. The future of the automobile industry, for instance, is betting on it as a key component for self-driving cars. Those self-driving cars may not recognize individuals, but they will need to be able to identify humans. A weaker form of a thing usually means a stronger form exists in order to make the weaker one better. Progress rarely suboptimizes on purpose.
For all machine learning, regardless of its application, the software development world needs to step back and ask how to test accuracy, how to ensure safety, how to uphold civil liberties, and how to protect privacy before deploying solutions. In some cases, those questions may lead to the conclusion that the risks outweigh the benefits; there may be no way to achieve accuracy without impinging on civil liberties. Developers then need to ask if development down that path should continue, or if another road, less traveled and less risky, will prove the better one.
In the end, society will need to decide the right set of tools required to maintain peace and ensure justice. That set of tools will continue to evolve. The moral questions we ask about facial recognition need to be applied to all technologies, from weapon systems to software (and the increasing combination of the two). The cliché of “just because we can, doesn’t mean we should,” applies in all cases of technology that makes a change to the human condition. We need to carefully consider our propensity to overvalue productivity. We need to place doing the right thing above doing the thing fast much more often. And we need to remember that not all the inhabitants of the world will share those values, regardless of how right or closely held they may be.
Additional background articles to consider:
IBM Abandons Facial Recognition Products, Condemns Racially Biased Surveillance, NPR
Opinion: IBM, Amazon moves on facial recognition are good baby steps toward removing bias, MarketWatch
Microsoft won’t sell police its facial-recognition technology, following similar moves by Amazon and IBM, The Washington Post
Facial Recognition cover image from flickr.com via www.vpnsrus.com.
For more on AI and ethics from Serious Insights, click here.