We’re just a month and change into the new year, and already there have been two notable developments underscoring the fact that some big privacy and civil liberties questions need to be addressed before continuing the wide-scale deployment of advanced facial recognition systems.
This week civil liberties groups in Europe won the right to challenge the UK’s bulk surveillance activities in the Grand Chamber of the European Court of Human Rights.
“The surveillance regime the UK government has built seriously undermines our freedom,” Megan Golding, a lawyer speaking for privacy advocates, stated. “Spying on vast numbers of people without suspicion of wrongdoing violates everyone’s right to privacy and can never be lawful.”
That development followed bold remarks made by none other than Microsoft CEO Satya Nadella just a few weeks earlier at the World Economic Forum in Davos, Switzerland.
Nadella expressed deep concern about facial recognition, or FR, being used for intrusive surveillance and said he welcomed any regulation that helps the marketplace “not be a race to the bottom.”
You may not have noticed, but there has been a flurry of breakthroughs in biometric technology, led by some leapfrog advances in facial recognition systems over the past couple of years. Now facial recognition appears to be on the verge of blossoming commercially, with security use cases paving the way.
Last November, SureID, a fingerprint services vendor based in Portland, Ore., announced a partnership with Robbie.AI, a Boston-based developer of a facial recognition system designed to be widely deployed on low-end cameras.
The partners aim to combine fingerprint and facial data to more effectively authenticate employees in workplace settings. And their grander vision is to help establish a nationwide biometric database in which a hybrid facial ID/fingerprint can be used for things such as fraud-proofing retail transactions, or, say, taking a self-driving vehicle for a spin.
However, the pushback by European privacy advocates and Nadella’s call for regulation highlight the privacy and civil liberties conundrums advanced surveillance technologies pose. It’s a healthy thing that a captain of industry can see this. These are weighty issues over which we waged two World Wars last century.
Always-on sensors have become ubiquitous to the point of being largely unnoticed in this century. But advanced FR systems introduce a critical nuance. Here’s how Jay Stanley, senior policy analyst for the American Civil Liberties Union, described it for me:
“Right now everybody knows that when you walk down the street you’re recorded by a lot of video cameras, and that the video will just sit on some hard drive somewhere and nothing really happens to it unless something dramatic goes down. The ultimate concern with this technology is that we’ll end up in a surveillance society where your I.D. is your face, and everybody is checking on you at every moment, monitoring you.”
It’s now commonplace for high-resolution video cams to feed endless streams of image data into increasingly intelligent data mining software. Along with this comes the rising potential for abuse of the technology. “We’re talking about an enormously powerful surveillance capability that no government has ever had in the history of humanity,” Stanley says.
These privacy and civil liberties questions need to be resolved for the greater good, to set a baseline for ethically tapping the benefits of this advanced technology.
Advanced use cases
Some of the most interesting advances are unfolding in the area of identifying individuals acting naturally in front of a surveillance camera. Robbie.AI, for instance, is honing a system tuned to recognize human emotion.
“Your face provides strong biometric cues, even if you dye your hair,” says Karen Marquez, Robbie.AI’s chief executive officer. “Iris and retina are somewhat intrusive alternatives, as you need to place yourself close to the sensors, and that’s not natural.”
Another example comes from Seattle-based tech company RealNetworks, where Mike Vance, senior director of product management, has received dozens of recent queries from K-12 schools across the nation seeking to participate in RealNetworks’ Secure, Accurate Facial Recognition (SAFR) program.
SAFR was rolled out with little fanfare at two Seattle pilot schools in early 2017. It combines commodity video surveillance cameras and PCs with facial recognition software supplied by RealNetworks. The system instantly recognizes teachers, administrators and parents, opens security doors for them, and alerts security officers whenever a surveillance camera catches sight of an unauthorized adult on school property.
“The level of accuracy that we, and others, have been able to achieve far surpasses what was possible three years ago,” says Vance. “We can now tell you whether or not somebody who’s in front of a camera is who they’re asserting to be. We can find them out of millions of people in a database in a fraction of a second.”
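The sub-second lookup Vance describes is typically built as a nearest-neighbor search over fixed-length face embeddings, where a single matrix product compares a probe face against every enrolled identity at once. A minimal sketch of that idea follows; the function names, the 128-dimension embedding size, and the 0.6 acceptance threshold are illustrative assumptions, not details of RealNetworks’ actual pipeline:

```python
import numpy as np

def build_gallery(embeddings: np.ndarray) -> np.ndarray:
    """L2-normalize enrolled face embeddings so cosine similarity
    reduces to one matrix-vector product at query time."""
    return embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)

def identify(gallery: np.ndarray, probe: np.ndarray, threshold: float = 0.6):
    """Return (index, score) of the best gallery match, or None when
    the top score falls below the acceptance threshold."""
    probe = probe / np.linalg.norm(probe)
    scores = gallery @ probe  # cosine similarity against every identity
    best = int(np.argmax(scores))
    return (best, float(scores[best])) if scores[best] >= threshold else None

# Toy gallery: 3 enrolled identities with 128-D embeddings, then a probe
# that is identity 1 plus a little noise (as a fresh camera shot would be).
rng = np.random.default_rng(0)
gallery = build_gallery(rng.normal(size=(3, 128)))
match = identify(gallery, gallery[1] + 0.01 * rng.normal(size=128))
```

Because the comparison is a dense matrix product, the same pattern scales to millions of rows, which is why real deployments pair it with approximate-nearest-neighbor indexes rather than brute force.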
Robbie.AI and RealNetworks are by no means alone in pushing facial ID systems into the commercial market. Google, Apple, Facebook and Microsoft have poured vast resources into theoretical research in the related fields of artificial intelligence, image recognition and face analysis. And the tech giants have openly shared key findings intending to accelerate the entire field.
The first generation of facial recognition systems has actually been in wide use for years at airports and border crossings, used primarily by border control officers and law enforcement agencies to catch criminals and deter terrorists. Their use for security access in other public settings, such as schools and workplaces, appears to be part of a natural progression.
RealNetworks’ system, for instance, derives from the streaming technology it pioneered for media players in the 1990s, combined with images amassed via RealTimes, its free app that lets users build photo slideshows. Customers’ photos and videos were used, with their permission, to train RealNetworks’ facial recognition engine, which maps 1,600 data points for each face.
SAFR is tuned to identify people walking past a video cam who aren’t looking squarely at the lens. It can delineate a variety of skin tones and distinguish nuances based on gender, age and geography.
“The algorithm that we’ve developed really relates back to our expertise from the 1990s of being able to scan video,” Vance explains. “We were able to operate in extreme conditions back then, with not a lot of bandwidth to work with . . . we developed technologies to pick the right image out of a stream of video to compare against a database.”
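Picking “the right image out of a stream of video” usually means scoring each frame for quality and keeping the sharpest one before attempting a match. A common heuristic for this is the variance of a frame’s Laplacian response (blurry frames score low). The sketch below illustrates that heuristic; it is an assumption for demonstration, not necessarily the measure RealNetworks uses:

```python
import numpy as np

# 3x3 Laplacian kernel: responds strongly at edges, so the variance of
# its output over a grayscale frame is a cheap sharpness proxy.
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def sharpness(frame: np.ndarray) -> float:
    """Variance of the Laplacian response over a 2-D grayscale frame."""
    h, w = frame.shape
    out = np.zeros((h - 2, w - 2))
    for dy in range(3):          # manual 3x3 convolution via shifted slices
        for dx in range(3):
            out += LAPLACIAN[dy, dx] * frame[dy:dy + h - 2, dx:dx + w - 2]
    return float(out.var())

def best_frame(frames):
    """Return the sharpest frame in a stream, for matching downstream."""
    return max(frames, key=sharpness)

# Toy check: an edge-rich frame should beat a flat, featureless one.
rng = np.random.default_rng(1)
flat = np.full((32, 32), 0.5)
textured = rng.random((32, 32))
chosen = best_frame([flat, textured])
```

In a bandwidth-starved pipeline like the one Vance describes, only the winning frame needs to be pushed through the expensive matching step.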
It has become much clearer how facial ID systems hold the potential to be used much more routinely in secure access and law enforcement scenarios. And as public acceptance spreads, biometric innovations, pivoting off of facial IDs, are likely to be utilized in retailing, public transportation and even healthcare, to do things like support a patient’s pain management routine or even detect genetic diseases.
The partnering of SureID and Robbie.AI embodies the path many experts believe lies ahead for commercial uses of the coming generation of facial recognition technologies. By integrating Robbie.AI’s leading-edge facial ID technology with SureID’s network of fingerprinting kiosks, now used to authenticate employees, the partners are taking aim at a sky-high goal, Marquez says.
They hope to supply the building blocks for a nationwide biometrics gathering system — one that can be widely shared to support broad consumer-focused initiatives, much as the tech giants shared results of their theoretical studies.
“This partnership can be a huge first step in developing holistic human biometric solutions that can protect us all against spoofing, impersonation, fraud and cybercrime,” she says. “This includes everything from replacing logins, passwords and registration codes to responding to customer issues the moment they occur.”
Marquez envisions a hybrid facial ID/fingerprinting system capable of alerting victims, in real time, as they are being targeted online by fraudsters. Other obvious use cases would be to provide real-time authentication to access autonomous vehicles or to control IoT devices in a smart home.
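One straightforward way to combine the two modalities Marquez describes is score-level fusion: each matcher returns a confidence score, and a weighted sum gates the final accept/reject decision. The sketch below is a minimal illustration under assumed weights and thresholds; none of these values come from SureID or Robbie.AI:

```python
def fuse_scores(face_score: float, finger_score: float,
                w_face: float = 0.5, w_finger: float = 0.5,
                threshold: float = 0.7) -> bool:
    """Accept the identity claim only if the weighted combination of the
    face and fingerprint matcher scores (each in [0, 1]) clears the bar."""
    combined = w_face * face_score + w_finger * finger_score
    return combined >= threshold

# A strong face match alone is not enough; both modalities must agree.
fuse_scores(0.95, 0.90)  # strong face + strong fingerprint: accepted
fuse_scores(0.95, 0.20)  # strong face + weak fingerprint: rejected
```

The appeal of fusion for fraud prevention is exactly this property: spoofing one modality (a photo of a face, a lifted print) is no longer sufficient on its own.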
“Facial recognition and fingerprinting technologies have been around for years and if used correctly they are more secure than any written credential,” she says.
With wider commercial use comes the potential for those in power to abuse the technology. And privacy advocates need to look no further than China to see the slippery slope unfolding.
China’s President Xi Jinping has been moving aggressively to acquire moment-to-moment surveillance and assessment capability over Chinese citizens. China has rolled out a national surveillance network comprising 200 million cameras, roughly four times the number in the U.S., and plans to have 300 million cameras in place by 2020, according to the New York Times.
China says it is using this surveillance net to track down criminals and scofflaws, including jaywalkers, whose punishment is to have their faces displayed on giant outdoor digital screens alongside lists of names of people who don’t pay their bills.
Marquez, the Robbie.AI chief executive, agrees that well-defined limits are in order. “Companies must lead the process by being transparent,” she says. “Facial recognition by itself can be a major advancement in data analysis and consumer protection in so many areas. Understanding the benefits and defining a framework for respecting civil rights is essential.”
(Editor’s note: Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.)
*** This is a Security Bloggers Network syndicated blog from The Last Watchdog authored by bacohido. Read the original post at: https://www.lastwatchdog.com/my-take-why-satya-nadella-is-wise-to-align-with-privacy-advocates-on-regulating-facial-recognition/