The purpose of security is to allow the “right” people to have access to data and resources and to keep others out. It is ultimately about having control over data and data processing and enforcing decisions about who gets access to what. In a hospital, for example, good security ensures that doctors, nurses and other practitioners (and those responsible for billing and administration) have the ability to get and use patient records, all while preventing that information from being accessed or used for other purposes.
A recent criminal case in San Diego tests the limits of security. The U.S. Department of Justice (DoJ) has charged Vincent Ramos, whose Canadian-based company Phantom Secure “sells electronic communications devices and encryption service to transnational criminal organizations to facilitate illegal activity and obstruct and impede law enforcement.” The charging document further alleges that the Phantom Secure network and devices, which run on modified BlackBerry handsets, used encryption to “prevent law enforcement from intercepting and monitoring communications on the network,” and that the security and encryption services were set up “specifically to facilitate criminal activity and obstruct, impede and evade law enforcement.”
The government cited as evidence of Phantom Secure’s alleged criminality the fact that it used AES encryption and PGP, that it routed all services (data, text, GPS and voice) through an encrypted server, and that it used both multiple proxy servers in general and proxies in Panama (which, according to Phantom Secure’s marketing materials, “does not cooperate with any other country’s inquiries [and] does not consider tax evasion a crime”). Phantom Secure, according to the charging document, also does not simply accept new customers but requires them to be vouched for by existing customers to prevent intrusions onto their “secure” network, keeping it a closed network available only to the community it created.
Phantom Secure also offers “remote wipe” services, whereby a compromised device can have the data on the device erased “if the device is seized by law enforcement or otherwise compromised.” The indictment goes on to list some of the crimes that others committed using these secure devices and alleges that there exists the “Phantom Secure Enterprise” made up of the device manufacturer and all of its users who paid the company money to facilitate their criminal enterprise.
Or, in other words, to provide security.
And that’s the problem. A secure device, a secure network, a secure communications channel, a secure storage mechanism, is designed to allow the user to control who can access the data or communications and to prevent persons not authorized by that user (or employer, etc.) from accessing the data or service. That’s what security does. It’s very dangerous to outlaw technology itself—particularly when it is the same technology that protects the security of everyone’s daily communications, banking transactions, health records and just about everything else.
Part of this problem has to do with the government’s (and, indeed, any government’s) definition of “unauthorized access.” Clearly, if law enforcement has a warrant or other lawful court order, it has the authority to intercept and read communications, extract and read files or otherwise access data. But that does not mean that it has the ability to do so. Technical measures designed by the user to control access to data mean precisely that: the user controls access to the data, even against third parties with lawful authority to obtain it. That’s what security does, and that’s what security is intended to do. In fact, if I were trying to design a secure communications network, I would likely do most (if not all) of what Phantom did: hardened handsets with device-to-device encryption and authentication on a closed and independently authenticated network using proxy and TOR services routed through countries that don’t cooperate easily with each other on sharing data. I might also add things such as IP and number spoofing, GPS spoofing, dynamic hopping, biometric authentication … you know, security.
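To make the “device-to-device encryption and authentication” idea concrete, here is a toy Python sketch of the encrypt-then-MAC pattern that underlies such designs: each message is encrypted with one key and then tagged with an HMAC under a second key, so a recipient (or an interceptor) without both keys can neither read the message nor alter it undetected. This is purely illustrative—the keystream construction here is homemade for demonstration, and a real system would use a vetted primitive such as AES-GCM; all function names are my own.

```python
import hashlib
import hmac
import os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a keystream by hashing key || nonce || counter (demo only)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
    """Encrypt-then-MAC: output is nonce || XOR-encrypted bytes || HMAC tag."""
    nonce = os.urandom(16)
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream(enc_key, nonce, len(plaintext))))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def decrypt(enc_key: bytes, mac_key: bytes, blob: bytes) -> bytes:
    """Verify the tag first; only then decrypt. Rejects altered messages."""
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed: message altered or wrong key")
    return bytes(c ^ k for c, k in zip(ct, keystream(enc_key, nonce, len(ct))))

enc_key, mac_key = os.urandom(32), os.urandom(32)
blob = encrypt(enc_key, mac_key, b"meet at the usual place")
print(decrypt(enc_key, mac_key, blob))
```

The point of the sketch is the policy problem in miniature: the same two dozen lines that protect a lawful user’s messages also lock out anyone else, warrant or no warrant.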
So, dusting off my prosecutor hat, I can understand the government’s frustration. If I were the head of a drug cartel, an international arms dealer, a terrorist, a spy, a child abductor, a tax evader or a money launderer, I would want to get me some of that Phantom Secure. There’s little doubt that the security of these devices helps facilitate criminal activity. Vincent Ramos likely knew that, and didn’t care. As Harvard mathematician and 1960s comedian Tom Lehrer once wrote about German missile scientist Dr. Wernher von Braun, who led the Nazi V2 and slave labor programs until surrendering to the Americans and turning his scientific skills to the U.S. cold war missile program and then NASA: “‘Once the rockets are up, who cares where they come down? That’s not my department,’ says Wernher von Braun.” In other words, “I just make the crypto. How you use it is up to you.”
So, is the designer or seller of security technology legally responsible when third parties use that technology as intended, but to facilitate secure communications for criminal purposes? Is TurboTax responsible when someone uses its software in furtherance of tax fraud? Is Google responsible for poisonings when someone uses its search engine to look up how to poison someone? Is Craigslist or Backpage responsible when someone uses their services in furtherance of human trafficking or sex crimes? The answer is: It depends. And that’s the problem.
If an Uber driver picks up a passenger in front of a bank and drives them to another location, the company is not liable for the bank robbery unless it either was part of the planning or execution of the robbery, or knew or should have known that the passengers were committing the robbery (the horizontal striped shirts, black masks and canvas bags with dollar signs on them might be a hint). On the other hand, if the company advertises, markets and promotes “bank robbery getaway services” (complete with the Jon Spencer Blues Explosion’s “Bellbottoms” on the iPod), then it has liability. So you can intend to facilitate criminal activity, actively seek to facilitate, conspire with others to commit crimes or aid and abet crimes you know about, or actively conceal crimes that have already occurred (criminal facilitation, attempt, conspiracy, aiding and abetting, material support, obstruction of justice, accessory before or after the fact). But each of these requires some form of knowledge and intent to facilitate a specific crime. On the other side of the ledger, federal law and regulation make it illegal to manufacture, sell or distribute “any electronic, mechanical, or other device, knowing or having reason to know that the design of such device renders it primarily useful for the purpose of the surreptitious interception of wire, oral, or electronic communication.” So, if you know or have reason to know that the design of your software renders it primarily useful for facilitating or concealing crimes, should that be illegal?
So, here are several theories of criminal liability (in roughly descending order of culpability) for Phantom Secure. First, the company intended that its product be used for crime. Second, it actually intended that crimes be committed and conspired with the criminals to assist them. Third, its product has one use and one use only (or primarily): to facilitate crime. Fourth, even if its product has multiple uses, it knows that the vast majority of its users are using it for criminal purposes, and it knows or should know that the design makes it primarily useful for crime. Fifth, even though the product/service has lawful and unlawful purposes, the company markets, sells and promotes the use of the product for criminal purposes. Sixth, the product has both criminal and non-criminal uses, but the company knows that some people use it for criminal purposes, and therefore it is liable for the crimes committed using its product. And finally, some form of strict liability: If your product is used for crime, you have a duty to prevent it, and therefore you are criminally liable for others’ crimes. The problem with the charging document is that the government doesn’t really settle on any of these theories; it kinda mashes them together, relying mostly on the fact that the product is advertised as being secure and that it is being used by criminals, but not that the creator intended that it be used that way.
This indictment presents genuine problems for “legitimate” security software, products and consulting. Sure, if you provide security consulting services and are hired by MS-13 to secure its data and services, you should probably think twice from both a legal and a personal-security standpoint. But should the maker of PGP be liable for selling encryption software that an MS-13 member then uses? Is designing a hackproof (or hack-resistant) communications network a crime if it can be used to thwart law enforcement? Cops have long wanted a “hacker tools” offense for selling or distributing code that can break into computers; this case represents one where the cops also want to make it a crime to prevent such break-ins.
In a public forum I can only raise the questions. For free. You want answers? That’s gonna cost you. Just note that while this case is limited to a company that the government alleges should have known was facilitating crime, the next time it could be you.