Facial recognition as a biometric authentication instrument has been with us for some time, as has its use by law enforcement at entry points to large events and by immigration officials at border crossings.
The latter two instances fall squarely within the rubric of surveillance, and it is in this arena that we see an expansion of both capability and implementation.
Law Enforcement Utility
For example, IBM has provided the New York Police Department with the capability to search its database of surveillance photos by ethnicity and skin color, in addition to the multiple facial feature points that have long been the primary basis for facial recognition applications. No doubt other entities have been provided this capability as well.
Meanwhile, the Memphis Police Department is accused of using photographs and social media monitoring to map associations between individuals. This is not an unusual investigative technique; indeed, some would place it in the category of preventive policing. The methodology is used often in anti-gang investigations, where tattoos or other unique characteristics provide an indication of association. The ACLU, however, has called out Memphis for using this methodology to monitor groups engaged in protests, such as Black Lives Matter—which, as the Tennessee Tribune articulates, brings back memories of the 1970s, when the FBI's COINTELPRO (counterintelligence program) spied on domestic organizations and groups perceived to be subversive.
In mid-September 2018, San Francisco Bay Area Rapid Transit (BART) updated its surveillance-oversight policy to give the communities it serves a say in the technology it uses for the safety, security and policing of the transit service. The technologies run the gamut from automated license plate readers to biometric identification and facial recognition hardware and software to drones with surveillance cameras. BART will be on the hook to produce a "Surveillance Use Policy" and a "Surveillance Impact Report."
Amazon, Microsoft, Facebook: All in the Facial Recognition Fray
In the summer of 2018, the ACLU used Amazon's facial recognition application, Rekognition, in a test that compared a database of 25,000 criminal mugshots against the public photos of the 535 members of the U.S. Congress. Much was made of the fact that 28 members of Congress were incorrectly identified as arrested individuals. Amazon pushed back, noting that the ACLU had not used the recommended settings for Rekognition. The ACLU ran the test at an 80 percent confidence threshold, which would yield an estimated 5 percent false-positive rate. Amazon ran its own testing, using a dataset of 850,000 faces and the same public photos of members of Congress, but adjusted the confidence threshold to 99 percent. Amazon's results had zero false positives, despite the far larger dataset. "This illustrates how important it is for those using the technology for public safety issues to pick appropriate confidence levels, so they have few (if any) false positives," Amazon said in a statement regarding the testing.
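The dispute comes down to where the caller sets the cutoff: the service returns a similarity score per candidate face, and matches below the chosen threshold are discarded. The sketch below illustrates that filtering step with hypothetical scores (the function and data are illustrative, not Rekognition's actual API), along with the arithmetic behind the ACLU's reported numbers.

```python
# Illustrative sketch of confidence-threshold filtering. The candidate
# names and scores below are hypothetical, invented for this example.

def filter_matches(candidates, threshold):
    """Keep only candidate matches at or above the confidence threshold."""
    return [(name, score) for name, score in candidates if score >= threshold]

# Hypothetical similarity scores for one probe photo against a mugshot collection.
candidates = [("mugshot_a", 99.1), ("mugshot_b", 84.3), ("mugshot_c", 80.2)]

# At the ACLU's 80 percent setting, all three candidates survive as "matches".
print(filter_matches(candidates, 80.0))

# At Amazon's recommended 99 percent setting, only the strongest match survives.
print(filter_matches(candidates, 99.0))

# The ACLU's reported result: 28 of 535 members flagged, i.e. about a
# 5.2 percent false-positive rate, in line with the estimate above.
print(round(28 / 535 * 100, 1))
```

The trade-off is the usual one: a lower threshold surfaces more true matches but also more false positives, which is why the appropriate setting depends on how the results will be used.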
At Microsoft, the artificial intelligence (AI) group made Microsoft Cognitive Services available in early 2018, permitting developers to add AI capabilities for vision, speech, language, knowledge and search to their applications. The ability to train against proprietary datasets has tremendous potential for cataloging goods, as well as for tightening the security of physical environments. The Face API supports recognition at million-person scale: a single person group can contain up to 1 million people.
Facebook, meanwhile, with billions of faces at its disposal, has been tweaking its facial recognition implementation in the face of a lawsuit playing out in Illinois. The Illinois case is a class action alleging that Facebook mishandles biometric information in violation of the Illinois Biometric Information Privacy Act—that Facebook has failed to disclose the methods, intentions and guarantees surrounding biometric data "such as a scan of a hand or face geometry," according to the lawsuit.
Any Facebook user who has posted a picture has seen the company's algorithm at work, as suggested "tags" pop up immediately while the image is scanned and compared against other Facebook users in the poster's circle of "friends." In its late-2017 piece on whether facial recognition should be feared, Facebook explains that the capability factors in the people the user has already tagged.
For those not paying attention, this feature was implemented some eight years ago, in 2010. Facebook noted that new techniques and ways to use the technologies are disclosed in the Facebook News Feed, which the company characterizes as the “doorstep” to Facebook.
Facial Recognition Growing
Use of facial recognition is only going to grow. Airlines have already run test projects that use the passport photo to screen passengers in lieu of boarding passes, while China and the United Kingdom regularly demonstrate the value of ubiquitous surveillance of their urban environments and the use of facial recognition to capture criminals.
Each country defines criminal behavior differently, however, and the reality is that facial recognition can be used to suppress civil liberties. One can expect civil liberties groups to keep pressing for transparent implementations so that individuals' privacy is maintained.