GDPR: Privacy Uber Alles (Literally)

When the EU’s General Data Protection Regulation (GDPR) became effective, most companies, especially in the United States, had a few simple thoughts. First, “Am I covered?” In other words, does GDPR apply to my activities, particularly in the United States? Second, “Am I compliant?” Again, if GDPR applies to what I do, am I complying with the new regulation? Both good questions. But not necessarily the ones I would ask.

The problem with how we look at GDPR, particularly in the United States, is that we see it as a “data breach” prevention regulation. We focus on data security, privacy policies, breach notification, the flow-down of contractual obligations and similar aspects of the new law. All are good, but philosophically they are many stages down the road from the core of GDPR.

GDPR as a Change in Philosophy

GDPR reflects a philosophy that the privacy of data about human beings is a fundamental human right. It reflects a liberty and freedom interest humans have: to not have unnecessary data collected about them, to not have a “profile” generated about them, to not have data used in ways they have not agreed to, and to not be reduced to a mere algorithm—particularly for important things. It reflects the idea that humans have a right to control what is known about them. It’s not just about firewalls and breach notifications. It’s about dignity.

For entities attempting to comply with GDPR and other privacy laws and regulations (or at least to maintain a defensible position), the starting point is to ask a few basic questions. The first is, “What personally identifiable information do I collect or process?” And the second—and more important—is, “Why?” In addition, you should ask, “Does the data subject know I have this data and am using it in this way?” Am I collecting/processing more than I need? Keeping it too long? Using or sharing it for purposes other than the reason I collected it? A whole host of questions flow from treating private information as a human right rather than an asset of the collector. At the end of the day, you should be comfortable that the data subject knows—to a reasonable degree of certitude—that you have collected the data and that you are using it the way that you are.

Drunk Texting

Ride-sharing service Uber recently filed a patent application for a technology that would allow the company to determine whether a passenger was drunk at the time they hailed a car. First question: Why? Why would Uber care if the passenger was drunk? Would Uber as a company refuse to pick them up if they were drunk? Would certain drivers refuse to pick them up? Would Uber charge more for the drunken passenger, or less as a public service designed to help the drunken passenger get home? Would Uber take other precautions (barf bag?) if it knew the passenger was drunk? Now, I note that this is just a patent application, and there’s no indication that Uber will implement this technology, but again, why? There may be perfectly good business reasons for wanting to know this information, but too often data analytics and inferences are collected and made just because—well, because it’s cool.

OK, so we have this awesome “drunk passenger” algorithm. It works by examining a bunch of factors such as (I’m assuming; I haven’t read the patent application – TL;DR) date, time and location (3 a.m. on a Friday night/Saturday morning in front of McSorley’s Ale House), along with other factors such as the rider’s age, other trips, and sloppy or slow typing, data input and responses to feedback. The application states that it uses factors such as “data input accuracy, data input speed, interface interaction behavior, device angle, or walking speed, service data, time of day when the first user requests the service, or a day of the week when the first user requests the service,” as well as Uber’s “history of previous interactions with users that had the same state” as the rider. In other words, Uber will collect tons of data about sober people’s riding habits to help determine whether a given rider is sober or drunk.

Awesome sauce.
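To see just how mundane the raw inputs are, here is a purely illustrative sketch, in Python, of the kind of scoring such a system might do. To be clear: this is not Uber’s code or its patented method. Every field name, weight and threshold below is invented for illustration, and a real system would presumably learn its model from exactly the mass of sober-rider behavioral data described above.

```python
from dataclasses import dataclass

@dataclass
class RiderSignals:
    """Signals of the kind the patent application names (hypothetical fields)."""
    typing_accuracy: float    # fraction of keystrokes without correction, 0..1
    typing_speed: float       # characters per second
    device_angle_deg: float   # how far from vertical the phone is held
    walking_speed_mps: float  # meters per second, from location updates
    hour_of_day: int          # 0..23
    weekday: int              # 0 = Monday .. 6 = Sunday

def impairment_score(current: RiderSignals, baseline: RiderSignals) -> float:
    """Naive illustration: compare the rider's behavior right now against
    that rider's own historical baseline, then add context (3 a.m. on a
    weekend raises the prior). All weights and thresholds are made up."""
    score = 0.0
    if current.typing_accuracy < 0.8 * baseline.typing_accuracy:
        score += 0.3   # typing noticeably sloppier than usual
    if current.typing_speed < 0.7 * baseline.typing_speed:
        score += 0.2   # typing noticeably slower than usual
    if abs(current.device_angle_deg - baseline.device_angle_deg) > 25:
        score += 0.1   # holding the phone at an unusual angle
    if current.walking_speed_mps < 0.6 * baseline.walking_speed_mps:
        score += 0.2   # walking much slower than usual
    if current.hour_of_day in (0, 1, 2, 3) and current.weekday in (4, 5):
        score += 0.2   # late night on a Friday/Saturday
    return min(score, 1.0)
```

The point is not the toy arithmetic. The point is that every one of those inputs is something the app already observes in passing, which is exactly why the “Why?” question matters.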

So, what’s Uber’s “legal basis” for collecting this data? Uber’s privacy policy is broad and generic, giving the ride-sharing service the right to collect things such as “device data” and “user data” and “information about how you interact with our services.” Does this clearly include things such as “data input accuracy, data input speed, interface interaction behavior, device angle, or walking speed”? On one hand, that’s interacting with the service. On the other hand … HOLY CRAP!!??!!

And that’s the problem with privacy and consent.  Google, Facebook, Uber or other companies will point to their privacy policy and say, “See, we told you we were collecting this data.” And lo and behold, there’s language in the privacy policy that is broad enough and vague enough to cover it.

For example, many automobile insurance companies have rolled out “real time” data analytics programs that will use either a device installed in your car or an app on your phone to gather data about your driving habits and use that data to “lower your insurance rates.”  Maybe. Or they can use that information to raise your insurance rates, or to deny or settle a claim based upon how you drive. Or to determine that you like to shop at expensive stores and therefore have a higher tolerance for rate increases. But what they are selling you is “lower rates.” They can’t be lower for everyone. Under GDPR, it’s not enough to have a privacy policy you can point to and say, “A-ha! If you read our policy carefully, you will see that you agreed to this.”
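The two-way nature of that telematics data is easy to see in a toy example. The sketch below is not any insurer’s actual model; the fields and numbers are invented. It simply shows that the identical driving feed that was sold to you as a discount program can just as easily justify a surcharge:

```python
def adjust_premium(base_rate: float,
                   hard_brakes_per_100mi: float,
                   night_miles_share: float) -> float:
    """Hypothetical telematics pricing: the same data cuts both ways.
    All weights and thresholds here are invented for illustration."""
    multiplier = 1.0
    if hard_brakes_per_100mi < 1.0:
        multiplier -= 0.10   # the advertised "safe driver" discount
    elif hard_brakes_per_100mi > 5.0:
        multiplier += 0.15   # a surcharge drawn from the very same feed
    if night_miles_share > 0.4:
        multiplier += 0.05   # late-night driving penalty
    return base_rate * multiplier
```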

So when you call a merchant or customer service center, you’ll hear, “This call may be recorded for ‘quality assurance and training’ purposes.” I’m not sure what that means, so I am not sure what I have consented to. I do know that in Maryland, where I live, it is illegal to record a call without consent (and carries $1,000 in damages). In theory, I have consented to the recording—but only if the recording is used for “quality assurance” or for “training.” If I have a dispute with a company and claim that I was told X, is the company’s use of the recording to show that I was told Y “quality assurance”? If the company said, “We are recording this conversation so that if you ever sue us, we can use it against you. Do you consent to the recording?” my answer might not be the same. GDPR, more than security and breach prevention, favors honesty and openness.

Companies need to take a step back and put themselves in the shoes of customers.  Would a reasonable person using the service really know that we are collecting and using this data in this way? When a chocolate from the Whizzo Chocolate Company says “crunchy frog,” would a consumer really expect to find a confection made with “only the finest baby frogs, dew picked and flown from Iraq, cleansed in finest quality spring water, lightly killed and then sealed in a succulent Swiss quintuple smooth treble cream milk chocolate envelope and lovingly frosted with glucose?” Reliance on broad language in a privacy policy may be insufficient to put a reasonable consumer on notice about what you are really doing.

This is particularly true when you start playing with the data. I may expect Uber to collect my location so the driver knows where to pick me up when I hail them. But to collect my data to profile me, to learn my habits and my blood alcohol content? Not cool, dude. Not cool.

And we get back to the question: Why?

Uber’s privacy policy indicates that the company will use data to “maintain the safety, security and integrity of our services and users” or “as requested by regulators, government entities, and official inquiries” or “for testing, research, analysis and product development.” In the context of the patent application, which of these does the drunk-detection data fall under?

In theory, Uber could collect this information, and—if the rider were under the legal drinking age—use it to call the police to arrest the rider for underage possession of alcohol (yeah, many jurisdictions consider being drunk to be, in itself, possession of alcohol) or, irrespective of age, for being “drunk in public” (yes, a bar is “in public,” as is the street corner where you wait for your Uber). Even if Uber doesn’t share the data, law enforcement can compel the company to produce it. New slogan: “Call an Uber, Go to Jail.”

The patent reflects a philosophy prevalent in both Silicon Valley and elsewhere that data can be used to improve or “enhance” services offered without regard for the human rights consequences of collecting and processing that data. It’s often about what can be done with data rather than what should be done with it. That’s the philosophy that GDPR was intended to force companies to examine or re-examine.

It’s not that the end result will be different. We may find that a solution to drunk Ubering is a great idea. Maybe if you are drunk, Uber will arrange to pick you up on the east side of the street rather than the west, so you don’t have to cross the highway in your condition. Maybe you get a discount if you are drunk, to encourage you to Uber rather than drive. Maybe Uber will offer a “take you to your car in the morning” service if it detects you are drunk, so you can recover your now-abandoned vehicle. There may be great benefits to such a service. But I doubt that Uber users realize the granularity with which Uber is collecting their data and the intimacy of the use. And that’s what GDPR is really all about.

Now drive safely. And don’t eat the chocolate called “spring surprise.”

Mark Rasch

Mark Rasch is a lawyer and computer security and privacy expert in Bethesda, Maryland, where he helps develop strategy and messaging for information security teams. Rasch’s career spans more than 35 years of corporate and government cybersecurity, computer privacy, regulatory compliance, computer forensics and incident response. He is trained as a lawyer and was the Chief Security Evangelist for Verizon Enterprise Solutions (VES). He is a recognized author of numerous security- and privacy-related articles. Prior to joining Verizon, he taught courses in cybersecurity, law, policy and technology at various colleges and universities, including the University of Maryland, George Mason University, Georgetown University and the American University School of Law, and was active with the American Bar Association’s Privacy and Cybersecurity Committees and the Computers, Freedom and Privacy Conference. Rasch has worked as cyberlaw editor for SecurityCurrent.com, as Chief Privacy Officer for SAIC, and as Director or Managing Director at various information security consulting companies, including CSC, FTI Consulting, Solutionary, Predictive Systems and Global Integrity Corp. Earlier in his career, Rasch was with the U.S. Department of Justice, where he led the department’s efforts to investigate and prosecute cyber and high-technology crime, starting the computer crime unit within the Criminal Division’s Fraud Section, efforts that eventually led to the creation of the Computer Crime and Intellectual Property Section of the Criminal Division. He was responsible for various high-profile computer crime prosecutions, including those of Kevin Mitnick, Kevin Poulsen and Robert Tappan Morris. He has been a frequent media commentator on issues related to information security, appearing on BBC, CBC, Fox News, CNN, NBC News and ABC News and in The New York Times, The Wall Street Journal and many other outlets.
