Cyberlaw

The Symbiotic, Parasitic Relationship Between Privacy and Security

Increasingly, at least in the law, privacy and security are diverging. There are cybersecurity lawyers who specialize in forensic investigations, data breaches, security regulatory compliance and ensuring that contracts protect the security of data and networks. Then there are privacy lawyers who specialize in drafting privacy policies, ensuring privacy by design and compliance with laws such as GDPR and CCPA. In organizations, the CPO (if there is one) and the CISO (again, if there is one) have overlapping but distinct roles, with the CISO reporting variously to the CIO, the CTO, the risk officer or the General Counsel, and the CPO sometimes (but not always) performing a “legal” or “compliance” function and reporting to legal. Lawyers who practice privacy law meet at IAPP (International Association of Privacy Professionals) meetings, while those who practice security meet at RSA or DEF CON. Different crowds. Different concerns. But a good deal of overlap.

It has long been the case that good security is essential for good privacy. However, a not-so-recent trend illustrates that, from a technical, legal and compliance standpoint, the tools, techniques and technologies that we use for security are, in both the short and long run, ultimately destructive of meaningful privacy. Unless we do something smart soon, we will lose both privacy and security.

In a letter on behalf of the Pennsylvania General Assembly, Benjamin Franklin famously noted that “those who would give up essential liberty to purchase a little temporary safety deserve neither liberty nor safety.” A more apt paraphrasing for modern times might note that “those who would give up essential privacy to purchase a little temporary security deserve neither privacy nor security.” And they shall have neither.

Security is Essential for Privacy

Now, don’t get me wrong. I’m all for security. And I’m all for privacy. But when it comes to “privacy,” in a very real sense, “I do not think that word means what you think it means.” Security—that is, the technologies and processes that let the “good” guys in and keep the “bad” guys out (and keep good guys from doing bad things, whether deliberately or inadvertently)—is essential to protect privacy. Databases that are not secure are not “private.” An essential component of data privacy is ensuring that data collected is accessed and used only for the legitimate purposes for which it was collected (hopefully with the knowledge and consent of the data subject) and that it is secured from both improper and unauthorized use and dissemination. And for that you need security. You need access control. You need authentication. You need logging. You need monitoring. You need intrusion detection. You need intrusion prevention. You need anti-exfiltration technologies. In a very real sense, you cannot have privacy without security.
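
To make that concrete, here is a minimal sketch of the pattern those requirements describe: authenticate the user, check that the access is for a permitted purpose, and log the access so that misuse can be detected later. It is my own illustration, not drawn from any particular product or statute, and the user names, record IDs, purposes and password handling are purely hypothetical.

```python
import hashlib
import hmac
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("audit")

# Hypothetical user store: a hashed credential plus the purposes this user may
# access personal data for. (A real system would use a dedicated password hash
# such as bcrypt or argon2; sha256 is used here only to keep the sketch short.)
USERS = {
    "analyst01": {
        "password_hash": hashlib.sha256(b"correct horse battery staple").hexdigest(),
        "allowed_purposes": {"fraud_review"},
    }
}

def authenticate(user_id: str, password: str) -> bool:
    """Verify the supplied credential against the stored hash."""
    record = USERS.get(user_id)
    if record is None:
        return False
    supplied = hashlib.sha256(password.encode()).hexdigest()
    return hmac.compare_digest(supplied, record["password_hash"])

def access_record(user_id: str, password: str, record_id: str, purpose: str) -> dict:
    """Release a record only to an authenticated user with a permitted purpose, and log it."""
    if not authenticate(user_id, password):
        audit_log.warning("DENIED %s -> %s (bad credentials)", user_id, record_id)
        raise PermissionError("authentication failed")
    if purpose not in USERS[user_id]["allowed_purposes"]:
        audit_log.warning("DENIED %s -> %s (purpose %r not permitted)", user_id, record_id, purpose)
        raise PermissionError("purpose not permitted")
    # The flip side discussed below: every legitimate access produces an
    # identity-bound record of who looked at what, and when.
    audit_log.info("ALLOWED %s -> %s purpose=%s at %s", user_id, record_id,
                   purpose, datetime.now(timezone.utc).isoformat())
    return {"record_id": record_id, "data": "..."}

# Example: a permitted access is granted and logged; anything else is refused.
access_record("analyst01", "correct horse battery staple", "customer-4711", "fraud_review")
```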

Think of your “private” papers at home. If you can, you put them in a safe or lockbox because, well, they are your “private” papers, right? By definition, to keep them private you try to make them secure. Indeed, laws that regulate privacy almost always mandate security, and laws that mandate security generally mandate it around private information. GLBA has a “security” rule and a “privacy” rule. HIPAA, the medical privacy law, requires both covered entities and their business associates to have reasonable security. The data breach notification laws in most states are often accompanied by laws that further mandate reasonable security. The FTC Act (which says nothing about either privacy or security but addresses “unfair or deceptive” trade practices) has been interpreted by both the courts and the FTC (to a greater or lesser degree) as requiring certain regulated custodians of consumers’ personal information both to respect the privacy of that data and to secure it. As Benjamin Franklin once noted, “Three may keep a secret, if two of them are dead.” So, privacy and security are flip sides of the same coin, right?

The Good, the Bad and the Ugly

In many ways, security is the enemy of privacy. To secure a database, for example, you need strong authentication and strong access control. You need to know who is accessing data and what they are doing, which means strong attribution and continuous monitoring as well. From a privacy standpoint, this means that you need to collect, store and maintain records of identity. You need the equivalent of someone’s birth certificate, passport, driver’s license, DNA, biometrics and authorization data (employment record, current employment status, agency, department and level) and, if you’re doing data classification and segmentation, their authority to access specific documents and files before you let them in. That’s fine from a security standpoint. Strong attribution is neither good nor bad. But, from a privacy standpoint, it means that everything someone does is capable of being attributed back to them.

Think of this in the “real world.” Every purchase, every communication, every conversation with a friend, every medical treatment, everything you watch, read, touch or enjoy, is collected and attributed back to you. It is the antithesis of privacy. It fails to, as the U.S. Supreme Court noted in a 1965 case about contraceptives, “enable[] the citizen to create a zone of privacy which government may not force him to surrender to his detriment.” Strong attribution, essential for security, is anathema to privacy.

The same is true for monitoring, particularly cross-platform and automated monitoring of the kind that is common for security. From a security perspective, we are constantly looking for “unusual” behavior: people logging in at unusual times; people elevating privileges; people looking at files or databases that they don’t usually look at; people logging in from unfamiliar IP or MAC addresses. That’s all fine from a security perspective.
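
Here is a minimal sketch of how such a monitor might flag “unusual” login hours. It is purely illustrative (my own assumption about how one might be built, not any particular product), but notice what it needs in order to work: a stored, per-person history of when that person normally logs in.

```python
from collections import defaultdict
from statistics import mean, pstdev

login_history = defaultdict(list)  # user -> login hours (0-23) observed so far

def record_login(user: str, hour: int) -> bool:
    """Return True if this login hour is anomalous for this user, then store it."""
    history = login_history[user]
    anomalous = False
    if len(history) >= 5:  # need some baseline before judging
        mu, sigma = mean(history), pstdev(history)
        # Flag logins more than ~2 standard deviations from the user's usual hour.
        anomalous = sigma > 0 and abs(hour - mu) > 2 * sigma
    history.append(hour)  # the profile of this person's behavior keeps growing
    return anomalous

# Example: a user who normally logs in around 9 a.m. suddenly logs in at 3 a.m.
for h in (9, 9, 10, 8, 9, 10):
    record_login("alice", h)
print(record_login("alice", 3))  # True -- flagged, but only because Alice is profiled
```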

From a privacy perspective, this means we are constantly collecting data about specific individuals’ activities—often intimate details—and using that data to build a profile of them. What time do they wake up in the morning? When do they use their computers or other devices? How do they interact with various IoT devices such as Nest, Alexa, Google Home, etc.? Keystrokes may be logged and monitored for “unusual” typing patterns. Users are profiled, analyzed and described. Big data and AI analytics are taking this profiling to new and potentially dangerous levels. Combining databases makes the problem worse.

Employers may not only monitor their employees’ actions at work (or from home if they work from home), but they can monitor social media (in many but not all states), purchase access to credit, criminal history or other profiling databases and link them together in the name of “security.” AI programs are really good at looking for patterns in this mass of data and promise to help companies identify “problematic” employees or customers. Indeed, a computer program available to physicians helps them identify (and reject as patients) individuals who are likely to sue in the event of malpractice (lawyers, for example).

We use computerized predictive models to determine criminal sentences based on algorithms that predict “future dangerousness”; we use algorithms to decide where to place surveillance cameras and police, and to determine mundane things such as credit scores or insurance rates as well as patterns of “fraud” and “crime.” In many ways, these become self-fulfilling prophecies: We deploy surveillance technologies where there is likely to be “crime,” arrest those the cameras catch offending, and thereby reinforce the statistics showing that there was “crime” there. Meanwhile, because of the perception that the other areas are “low crime,” we don’t deploy police or surveillance to those areas, and therefore don’t make arrests, reinforcing the statistics indicating that these areas are, in fact, “low crime.”
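
A toy simulation makes the feedback loop visible. In this sketch (mine, with made-up numbers), two areas have identical underlying offense rates, but only the area the model labels “high crime” is watched, so only that area generates observed offenses, and its estimate stays high while the other’s decays toward zero.

```python
import random

random.seed(0)
TRUE_RATE = 0.05                      # identical underlying offense rate everywhere
estimates = {"A": 0.06, "B": 0.04}    # the model starts with a slight, arbitrary tilt

for week in range(52):
    watched = max(estimates, key=estimates.get)   # patrol only the "high crime" area
    for area in estimates:
        offenses = sum(random.random() < TRUE_RATE for _ in range(1000))
        observed = offenses if area == watched else 0   # unwatched offenses go unseen
        # Update the estimate from observed offenses only (simple moving blend).
        estimates[area] = 0.9 * estimates[area] + 0.1 * (observed / 1000)

print(estimates)  # area A's estimate stays near 0.05; area B's decays toward zero
```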

Security technologies such as Clearview’s facial recognition software also promise to make us safer by allowing cameras to be linked to databases and to identify wanted criminals, terrorists or other “bad” people. Facial recognition and biometrics are, in the physical world, what attribution technologies are in the virtual world. They begin with the assumption that there is no right or possibility of anonymity and that not only should a person be able to be strongly attributed in everything they do, but also that databases of what they are doing can and therefore should be linked so that we can create a behavioral pattern. For years, facial recognition has been attacked because it may not work as promised and may result in false attribution, particularly for people with dark skin. What is more dangerous is that it may (and ultimately will) work as promised. It’s one thing to have a camera in a public place where there may be a security incident (e.g., the Super Bowl) and to scan faces for individuals for whom there is an outstanding warrant for arrest for violent criminal behavior (and, please, not just parking tickets) and then immediately delete the records of everyone who doesn’t match. It’s another thing to scan and retain records of everyone’s activities (as we ultimately will do), link them to massive databases of identity and activity (including social media) and then use AI to profile these people.

Security? Maybe. Privacy? Not so much.

Ethics

One of the problems with the current Silicon Valley (and Washington) mentality is that we are constantly examining what can be done with technology rather than what should be done. We ask whether something is legal (sometimes) rather than whether it is right or moral. We look at the short-term benefits while ignoring the longer-term problems. We look for short-term economic advantages (can we monetize this?) and fail to look at the overall impact of the technology. Once we establish a constituency for the privacy-invading data (whether it’s a company selling a product or a law enforcement or intelligence agency), it becomes almost impossible to dislodge that constituency. The concepts of “private” and “public” data are blurred. Automated license plate readers (ALPRs) let police issue speeding tickets and trace stolen cars, and let bounty hunters and repo men track scofflaws, but they also allow ex-boyfriends to stalk, intelligence agencies to surveil and competitors to gather competitive intelligence.

Security is essential to protect privacy. But we need to be wary about sacrificing privacy to obtain security. Because even if we don’t see an impact to us personally right now, ultimately the massive data collection and use will come back to haunt us. As a wise person once said: “Justice will not be served until those who are unaffected are as outraged as those who are.”

Oh, and that wise person was Benjamin Franklin.

Mark Rasch

Mark Rasch is a lawyer and computer security and privacy expert in Bethesda, Maryland, where he helps develop strategy and messaging for the Information Security team. Rasch’s career spans more than 35 years of corporate and government cybersecurity, computer privacy, regulatory compliance, computer forensics and incident response. He is trained as a lawyer and was the Chief Security Evangelist for Verizon Enterprise Solutions (VES). He is a recognized author of numerous security- and privacy-related articles. Prior to joining Verizon, he taught courses in cybersecurity, law, policy and technology at various colleges and universities, including the University of Maryland, George Mason University, Georgetown University and the American University School of Law, and was active with the American Bar Association’s Privacy and Cybersecurity Committees and the Computers, Freedom and Privacy Conference. Rasch has worked as cyberlaw editor for SecurityCurrent.com, as Chief Privacy Officer for SAIC, and as Director or Managing Director at various information security consulting companies, including CSC, FTI Consulting, Solutionary, Predictive Systems and Global Integrity Corp. Earlier in his career, Rasch was with the U.S. Department of Justice, where he led the department’s efforts to investigate and prosecute cyber and high-technology crime, starting the computer crime unit within the Criminal Division’s Fraud Section, efforts that eventually led to the creation of the Computer Crime and Intellectual Property Section of the Criminal Division. He was responsible for various high-profile computer crime prosecutions, including Kevin Mitnick, Kevin Poulsen and Robert Tappan Morris. He is a frequent commentator in the media on issues related to information security, appearing on BBC, CBC, Fox News, CNN, NBC News and ABC News and in the New York Times, the Wall Street Journal and many other outlets.
