Recently, I was reading about the U.S. government considering funding a “moonshot” information security project—that is, like the 1960s effort to reach the moon (before the Soviets), abandon incrementalism in information security and try the impossible (or nearly so). So, this is where the audience participation part of the program begins: If you could do anything in InfoSec, what would you do? As I pondered this, I realized that many of the problems in information security—and privacy as well—are not technological but definitional. Many of the things we avidly want to do are diametrically opposed to some of the other things that we want to do. The fault lies not in our stars but in ourselves.
Passwords need to die. No, they need to be murdered in their beds. We need a much better form of authentication and access control. The iPhone X model might work in most environments—you simply approach the hardware, it authenticates you to the device and the device authenticates you to, well, everything else. True single sign-on with a biometric authenticator. Great. But biometrics are subject to spoofing. And unlike passwords, you can be compelled to produce a biometric (well, maybe unlike passwords—courts are split on this). But in a moonshot environment, you want to just use your devices without doing anything.
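The "device authenticates you to everything else" model can be sketched as a challenge-response protocol: the device holds a secret that never leaves it, and each service verifies a signed challenge instead of a password. The sketch below is a toy, using symmetric HMAC keys for brevity; every class and name here is invented for illustration (real deployments use asymmetric keys, as in FIDO2/WebAuthn).

```python
import hashlib
import hmac
import os

class Device:
    """Stand-in for the phone: holds a secret unlocked by the biometric check."""

    def __init__(self):
        self._secret = os.urandom(32)  # never leaves the device

    def enroll(self, service):
        # Derive and share a per-service key at enrollment time.
        key = hmac.new(self._secret, service.name.encode(), hashlib.sha256).digest()
        service.register(key)

    def respond(self, service_name, challenge):
        # Re-derive the per-service key and sign the challenge.
        key = hmac.new(self._secret, service_name.encode(), hashlib.sha256).digest()
        return hmac.new(key, challenge, hashlib.sha256).hexdigest()

class Service:
    """Stand-in for anything the user logs in to."""

    def __init__(self, name):
        self.name = name
        self._key = None
        self._nonce = None

    def register(self, key):
        self._key = key

    def challenge(self):
        self._nonce = os.urandom(16)  # fresh nonce defeats replay
        return self._nonce

    def verify(self, response):
        expected = hmac.new(self._key, self._nonce, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, response)

phone, bank = Device(), Service("bank.example")
phone.enroll(bank)
nonce = bank.challenge()
assert bank.verify(phone.respond("bank.example", nonce))
```

The user types nothing: the only gate is the biometric unlock of the device secret, which is exactly the trade-off the paragraph above describes.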
On the flip side: Strong authentication means strong attribution. Strong attribution means loss of privacy. Everything you do is attributed back to you, which makes it a great tool for profiling, data collection, data analytics, etc. So we also need strong anti-attribution tools—anonymization, pseudonymization and the ability to use the web in a way that does not attribute back to us. And that has to be built in as well.
Data classification is the bane of any chief information security and compliance officer (CISCO)’s existence. In fact, although we pay much lip service to it, we really don’t do it. At all. Sure, we create all kinds of classes of data—privacy data, PHI, PII, confidential data, intellectual property, proprietary data, tax information, financial records, etc.—each with its own level of security, access control and time to live (and die). But we rely on individual users to classify (and protect) their data, or on crude broad categories, or on weak tools to identify the data. Moreover, our data classification and protection schemes only work in our own environments. When this protected data migrates outside of our domains or networks, the data loses whatever protected classification it may have once had. It’s just data at that point. Ask Edward Snowden.
A moonshot data classification program would granularly classify data at the time of creation. And by granularly, I mean each paragraph and each sentence therein might get a different classification—you know, like in the intelligence community (well, at least, how it is supposed to work in the intelligence community). Remember, this is a moonshot, right? The data classification would be based on the identity and role of the creator, the identity of the intended audience, the topic and the data itself. A memo to or from counsel about an issue related to legal advice is presumptively marked Attorney Client Privileged. An engineering report about a new product is marked Proprietary and Sensitive. An internal report on a personnel issue is marked Confidential and PII. And so forth. No muss, no fuss.
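The classify-at-creation idea reduces to a rule engine over the attributes the paragraph lists: creator role, audience and topic. A minimal sketch, with all roles, labels and keyword rules invented for illustration:

```python
# Hypothetical classification-at-creation: each rule inspects the document's
# metadata and content hints, and every matching label is applied.
RULES = [
    (lambda d: "counsel" in (d["author_role"], d["audience_role"]),
     "Attorney-Client Privileged"),
    (lambda d: "design" in d["topic"] or "product" in d["topic"],
     "Proprietary and Sensitive"),
    (lambda d: "personnel" in d["topic"],
     "Confidential / PII"),
]

def classify(doc):
    """Return every label whose rule matches; default to a baseline label."""
    labels = [label for rule, label in RULES if rule(doc)]
    return labels or ["Internal"]

memo = {"author_role": "counsel", "audience_role": "engineer",
        "topic": "litigation strategy"}
print(classify(memo))  # ['Attorney-Client Privileged']
```

Per-paragraph granularity would just run the same function over each paragraph instead of the whole file; the hard part, as the article notes, is getting the rules right, not the plumbing.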
Data classification would cross boundaries as well. The data (or file) would determine who could access the data and for what purposes. It would contain controls on forwarding, copying, etc. The data also would have embedded rules on encryption, access control and time to live.
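Data that "determines who could access it" means the policy travels inside the object rather than living in a perimeter device. A toy envelope along those lines, with all field names invented for illustration (a real system would enforce this cryptographically, not with an honor-system class):

```python
import time
from dataclasses import dataclass, field

@dataclass
class Envelope:
    """Self-describing data: the access rules ride along with the payload."""
    payload: bytes
    readers: set                      # identities allowed to open it
    may_forward: bool = False         # embedded control on forwarding
    may_copy: bool = False            # embedded control on copying
    expires_at: float = field(        # embedded time to live
        default_factory=lambda: time.time() + 86400)

    def open(self, identity, now=None):
        now = time.time() if now is None else now
        if now >= self.expires_at:
            raise PermissionError("data has expired")
        if identity not in self.readers:
            raise PermissionError(f"{identity} is not an authorized reader")
        return self.payload

doc = Envelope(b"Q3 roadmap", readers={"alice"}, expires_at=2_000_000_000)
assert doc.open("alice", now=0) == b"Q3 roadmap"
```

Because the rules are in the object, they cross domain boundaries with it—which is precisely what today's network-resident controls fail to do.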
Now, let’s look at the flip side: Welcome to the law of unintended consequences. If the embedded data has a time to live, then in theory it cannot be resurrected. That means that critical data might be unavailable because a program said so. The system would have to accommodate things such as litigation holds, special circumstances and cascading and overlapping authorities that need access to the data. If we knew at the time of file creation every person and every purpose for which that individual file might ever be needed (and for how long), we might be able to automate the process. Otherwise, we are rigidly automating an ill-defined process.
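The litigation-hold problem can at least be made explicit in code: expiry is one question, purgeability another, and a hold always outranks the TTL. A sketch, with every name invented for illustration:

```python
import time

class Record:
    """Data with a time to live, plus override authorities (e.g. legal holds)."""

    def __init__(self, data, ttl_seconds, created=None):
        self.data = data
        self.created = time.time() if created is None else created
        self.ttl = ttl_seconds
        self.holds = set()  # e.g. {"litigation:case-123"}

    def expired(self, now=None):
        now = time.time() if now is None else now
        return now > self.created + self.ttl

    def purgeable(self, now=None):
        # Expired data still cannot be purged while any hold is in force:
        # the law of unintended consequences, handled explicitly.
        return self.expired(now) and not self.holds

r = Record("old memo", ttl_seconds=10, created=0)
assert r.expired(now=100) and r.purgeable(now=100)
r.holds.add("litigation:case-123")
assert r.expired(now=100) and not r.purgeable(now=100)
```

The mechanics are trivial; what the article is pointing at is that enumerating the *authorities* (who may place a hold, over which data, for how long) is the ill-defined part.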
Death of the Network
In my moonshot (yours may differ), the corporate network would disappear. The network is currently like a medieval village—a self-contained enterprise with a (poorly) defined perimeter, authorized and unauthorized users, a castle, a wall, a moat and guards. Good guy in, bad guy out. The corporate (or government) network is based on the idea that the employee comes to the network; is provided a network connection, access to that network and a predefined set of network resources by the employer; and is also provided access to the interwebs by that employer, who monitors and controls everything that employee does (except for downloading phishing attacks, which apparently are done at will). That model is soooo ’90s—either 1990s or 1590s.
With the introduction of ubiquitous high-speed wireless connections to portable devices (and I mean 5G and above, not WiFi) the model changes. Users will access the web first and from the web, access corporate data and resources, which will not be on corporate networks per se but on some shared resource such as the cloud. Access controls and monitoring will (may) migrate from some ill-defined perimeter closer and closer to the data level itself. Individual packets of data may be anywhere at all—or many places at the same time. Flexible, expandable, ubiquitous and accessible from anywhere and any device. Awesome.
Of course, that creates a whole host of other problems. Plus, geography—or data sovereignty—is currently important. It may not be in this moonshot environment, but then again, it may be.
The principal means we have of securing the contents of data remains cryptography. There are a lot of problems, practical and theoretical, with how cryptography works and with how it is deployed. The most basic problem is that encrypted files or communications still have to be accessed by “authorized” persons. So the strongest encryption, relying on enormous prime numbers and other cryptographic functions (hardware and software), still has to be unlocked—currently with some passphrase or passcode (or biometric). The strongest lock can still be bypassed with the KIFD (Kick In the Freakin’ Door) protocol.
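The point in miniature: however strong the cipher, the key is usually derived from something a human can type (or be compelled to produce), so the whole construction collapses to the strength of the passphrase. This sketch uses the standard library's PBKDF2; the iteration count and parameters are illustrative, not a recommendation.

```python
import hashlib
import os

def derive_key(passphrase: str, salt: bytes, iterations: int = 600_000) -> bytes:
    """Stretch a human passphrase into a 256-bit key via PBKDF2-HMAC-SHA256."""
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, iterations)

salt = os.urandom(16)
k1 = derive_key("correct horse battery staple", salt)
k2 = derive_key("correct horse battery staple", salt)
k3 = derive_key("hunter2", salt)

# Same passphrase, same key: anyone who guesses (or extracts) the
# passphrase gets the "impossibly strong" key for free.
assert k1 == k2 and k1 != k3
```

Key stretching raises the cost of guessing; it does nothing against the KIFD protocol.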
The other problem is that current cryptography provides protection against current technological attacks. Quantum computing and other moonshot technologies provide both newer protections and newer attack parameters. Imagine trying to use the Sopwith Camel to defend against an attack by an F-35. Just remember, “Don’t be too proud of this technological terror you’ve constructed.” Time is on my side, yes it is.
Forget Me Not
We want unneeded data to die. It’s more secure that way. Unless we need it. Then we need to be able to restore it. Unless we don’t need it. Then it needs to die. It’s the same thing with government access to the data. We want to keep the government out of our systems (all governments). If we are the government, we want to keep other governments out of our systems. Unless there’s a real need for them to get the data (such as a warrant or to save lives). Then we want them to get the data—and quickly—but only the data they need. Easy peasy, lemon squeezy.
Backup and Restore
True InfoSec is about both data protection and data availability (and integrity). You want to make sure that, even if there is an attack on infrastructure, the data is both survivable and accessible. Which means—in the current parlance, at least—data backup, hot sites, warm sites, archival and retrieval. Which also means multiple copies of the data. Which means multiple attack parameters and multiple points of defense. If we move security from the network to the data, then it doesn’t matter where the data is, or how many copies there are, right? We’ll see about that.
With InfoSec, the problem typically is not with the silicon, but with the carbon. Like Soylent Green, it’s people! And most of the time people are doing what CISOs consider to be “stupid” things—they are circumventing or ignoring controls, typically (but not always) to do what they think is their job. They clicked that phishing attachment because it looked like it came from their boss. They needed to access that website to do online banking (or sports betting, or whatever) so they could stay late at the office and do other work. You get the idea.
And you can’t make systems idiot-proof. At best, you get more-clever idiots. Any moonshot approach has to remember that the system is designed for humans to operate. Remember the Mars Climate Orbiter, which was lost because one team delivered thrust data in imperial units while the navigation software expected metric? It came from the same agency that ran the original moonshot. (“Motto: An Ounce of Prevention Is Worth a Kilogram of Cure.”) So keep that in mind with your moonshot.
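The unit mixup is a bug a type system can catch: make values carry their units and refuse to mix a bare number with a measured one. A minimal sketch (the conversion factor is the standard lbf-to-newton constant; the class is invented for illustration):

```python
class Force:
    """A force value that remembers its unit, stored internally in newtons."""

    FACTORS = {"N": 1.0, "lbf": 4.4482216}  # newtons per unit

    def __init__(self, value, unit):
        if unit not in self.FACTORS:
            raise ValueError(f"unknown unit {unit!r}")
        self.newtons = value * self.FACTORS[unit]

    def __add__(self, other):
        if not isinstance(other, Force):
            # A bare float is exactly the Mars Climate Orbiter bug:
            # a number whose unit nobody checked.
            raise TypeError("bare numbers have no unit; wrap them in Force")
        return Force(self.newtons + other.newtons, "N")

thrust = Force(100, "lbf") + Force(50, "N")
assert abs(thrust.newtons - (100 * 4.4482216 + 50)) < 1e-6
```

The fix isn't cleverer engineers; it's a system that makes the human mistake impossible to express—which is the right frame for most of the carbon-layer problems above.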
So, those are my moonshot security goals. I’d be interested in yours. Post comments or just send smoke signals. Until then, Houston, we’ve got a problem.