We are surrounded by technology. Our devices are always on and always connected, and therefore, so are we. We scroll through our smartphone notifications in the morning, listen to podcasts on our commute, spend an entire workday in front of a computer screen, and end, finally, with the ritual of watching the latest offering from our favorite streaming services.
This may already sound like an ancient ritual, which highlights how rapidly we get used to new technology and take it for granted. Aside from the work computer, these activities barely existed just a decade ago. And those are only the connections we interact with directly. Our phones, cars, and homes are carrying on countless activities in the background, turning even hidden aspects of our lives into data.
It’s important to note how quickly we become complacent about things that would have seemed like science fiction a generation ago. We need to look ahead to what it will mean if that familiar process happens with even more personal forms of technology. Despite digital tools’ ubiquity in our lives, many newer variants are only starting to take root and still have a long way to go before becoming entrenched and seamless parts of our routines. Take Amazon’s virtual assistant, Alexa. Nearly all of the device’s millions of satisfied customers have experienced a nonsensical response to a request, a bungled calendar appointment or email message, or even the unsettling laughter many users have reported. These glitches make for entertaining anecdotes, and this tends to reduce—erroneously—concerns over the surveillance potential of these devices.
One of the most rapidly advancing fields in technology is biometric data collection and analysis, which I began discussing in my previous post. As I noted, innovations like home genetic testing kits are not yet fully reliable given the current state of the research. However, companies and governments, in the United States and abroad, are already implementing systems of biometric surveillance on a massive scale, and these systems will be ready to go when the technology catches up. The individual privacy concerns I’ve written about before pale in comparison to the threat of centralized troves of biometric data on workers and citizens. Just because these technologies are in early stages of development and often primitive doesn’t mean we should discount future dangers. We must be prepared to deal with the challenges of population-wide biometric surveillance before it becomes a reality.
Staying with Amazon, the company recently patented a smart wristband capable of monitoring the motion of a worker’s hands. The device is still only a concept and may never become a fixture of Amazon’s workplaces, but the ethical hazards it presents are still important to explore. In conjunction with the culture of brutal efficiency at Amazon’s warehouses (seven employees have died on the job since 2013), it is easy to see how such a device could turn exploitative, pushing workers to inhumane levels of productivity. It evokes the factory scenes of Charlie Chaplin’s Modern Times, with the human becoming the servant of the machine. And if a machine can so closely track how a job is being done, a machine should probably be doing the job in the first place! It’s better to be replaced by a robot than to be treated like one.
Walmart, a major competitor, is currently working on an audio surveillance system that will listen in on cashiers’ conversations with customers at checkout. It will be able to judge performance based on efficiency at bagging, how quickly the line moves along, and even the content of conversations. These types of developments give companies a tremendous amount of power over their employees. While such tools could be used to enhance productivity in a way that benefits employees and employers alike, they can also veer into dystopian territory. Do your rights to privacy and autonomy cease to exist the moment you punch the clock at work? The level of stress provoked by knowing you are under constant surveillance is difficult to fathom—at least for anyone who hasn’t lived in a totalitarian state where such scrutiny was routine.
We are seeing a similar expansion of monitoring on the level of governments, with China serving as perhaps the most prominent example. Here again, some of the technology in play is somewhat rudimentary. A journalist who recently visited the central city of Zhengzhou had the chance to try out a pair of the facial recognition sunglasses used by police and concluded that they were, ultimately, not very useful. The glasses, connected to a small camera that is in turn wired to a minicomputer, compare photos of individuals against a database of stored images, names, and national identification numbers. The process itself is still clunky, and, combined with people’s understandable evasion of the devices, the overall effect is disappointing. But imagine what small advances in speed, accuracy, and ease of use would do. (This isn’t limited to human faces, of course. The police in many countries routinely use license-plate scanners that automatically look up and log every car in range.)
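To make the matching step concrete: systems like these typically reduce each face to a numeric feature vector (an “embedding”) and compare it against stored vectors tied to identity records. The sketch below is purely illustrative, not a description of the Zhengzhou system; the function names, toy three-dimensional vectors, and the 0.9 similarity threshold are all assumptions chosen for clarity.

```python
import math

def cosine_similarity(a, b):
    """Similarity of two feature vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def match_face(probe, database, threshold=0.9):
    """Return the (record_id, score) of the closest stored embedding
    whose similarity clears the threshold, or (None, threshold) if none do."""
    best_id, best_score = None, threshold
    for record_id, embedding in database.items():
        score = cosine_similarity(probe, embedding)
        if score >= best_score:
            best_id, best_score = record_id, score
    return best_id, best_score

# Toy database: national ID numbers mapped to stored face embeddings.
database = {
    "ID-1001": [0.9, 0.1, 0.3],
    "ID-1002": [0.2, 0.8, 0.5],
}

# A probe embedding captured by the camera; close to the first record.
probe = [0.88, 0.12, 0.31]
print(match_face(probe, database))  # matches "ID-1001"
```

Real deployments use high-dimensional embeddings from a neural network and search millions of records, which is exactly why the “small advances in speed, accuracy, and ease of use” mentioned above matter so much.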
We must grapple with ownership issues, for one. Companies own everything employees do on their work computers, but do they own everything said out loud in the office? Biometric data is the next frontier of the battle for privacy. Can businesses make a case for using voice recognition data to monitor employees? Can governments use facial recognition software in a way that does not violate individual rights? Who has access to the data? How long is it stored, and by whom?
As we weigh the answers to these questions, we must be realistic about what we can legislate and what we cannot. Legislation such as the General Data Protection Regulation, which went into effect recently in Europe and which I covered here, is no doubt important. Expanding that framework into a level international playing field would further boost transparency and accountability. At the same time, the digital landscape is shifting rapidly, and it is impossible to play legislative catch-up every time a new tool hits the market.
Beyond specific regulations, then, it is crucial to stand for a strong international framework based on shared values and institutions to serve as legal and ethical precedents. Resilient alliances of democratic nations are the best bulwark against authoritarian governments—the governments, of course, most likely to use digital tools to surveil their citizens and, more importantly, to use that monitoring to persecute and repress them.
The only truly effective way to combat this type of Orwellian control, which is tempting for private companies and public institutions alike, is to bolster democratic governments and institutions. The structure and rhetoric of international institutions should reflect the rapidly changing technological landscape and that landscape’s relevance to the preservation of democracy. At its recent summit in Brussels, NATO allies set up a new Cyberspace Operations Centre that can draw on individual member states’ cyber capabilities for mutual protection. Words and task forces must be backed up by action, so we will see whether this step marks a substantive change, but it nevertheless sets a helpful tone. It says that the democratic nations of the world stand united in recognizing technological threats, and will work together to ensure that technology is used to promote human freedom, innovation, and flourishing—not to curtail them.
*** This is a Security Bloggers Network syndicated blog from Blog | Avast EN authored by Avast Blog. Read the original post at: https://blog.avast.com/big-brother-is-watching-listening