The promise was that cloud computing would simplify enterprise technology. Enterprise users would be able to focus on their applications and services while leaving the deeper security issues of infrastructure, and the secure delivery and management of applications, to the cloud provider. It didn’t exactly turn out that way.
The shared responsibility model (Amazon’s term for the division in which security of the cloud is the provider’s responsibility and security in the cloud is the consumer’s) works well when applied to specific cloud services. But the sheer acceleration of cloud use, the diversity of cloud services, and the way cloud has transformed how applications are built and deployed mean that, in aggregate, enterprise technology environments have never been more complex than they are today.
This increase in complexity was a topic recently tackled at the 2018 Qualys user conference by 451 Research analyst Scott Crawford. The IT world is a lot more complex than it was a decade ago. There has been a proliferation of endpoint form factors, multiple cloud infrastructure services, dozens of cloud software services, software containers, and virtualized workloads. And the old stalwart software development process known as waterfall has been displaced by continuous delivery.
Cloud success bred complexity
Not surprisingly, as cloud became less expensive and easier to deploy, enterprises deployed more cloud services. As Crawford pointed out, adoption rose as the unit cost of cloud fell and ease of access grew. Total cloud spending, and likely aggregate complexity, rose along with it.
The low cost and easy deployment are why cloud has proliferated throughout the enterprise — whether it be cloud infrastructure services, software-as-a-service applications, containerization, virtualization, or serverless services.
As Crawford highlighted, all of this is the continued abstraction of IT from physical infrastructure and familiar enterprise technology constructs, such as operating systems and servers. “We’ve seen the rise of things like containers, which really didn’t begin as a way to optimize IT automations, but as a way to make application development and deployment a lot more reliable. If you can package the dependencies in an application in one package, you didn’t have to worry about dependencies, all you had to worry about is the container,” Crawford said.
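Crawford’s point about packaging an application with its dependencies is easiest to see in a container image definition. The following is an illustrative sketch, assuming a hypothetical Python service with an `app.py` and a `requirements.txt`; the specifics are not from the talk:

```dockerfile
# Illustrative only: the app and its pinned dependencies ship together in
# one image, so the deployment artifact to reason about is the container
# itself, not the host's installed libraries.
FROM python:3.11-slim

WORKDIR /app

# Install dependencies inside the image; the host needs nothing beyond a
# container runtime.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
CMD ["python", "app.py"]
```

Because everything the application needs is baked into the image, the same artifact can run unchanged on a laptop, a test cluster, or a production platform.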
In many ways, that makes availability and testing much easier. In dedicated on-premises environments, for instance, it was next to impossible to build a test environment with all the right components, identical to production: the setups were so hardware-oriented that they were cost-prohibitive, and they were extremely difficult to keep in sync, since every change in production had to be mirrored perfectly in the test environment.
“With containers we don’t really have to deal with that anymore. And we can move containers fairly easily across environments. In fact, containers provide the opportunity to actually move computing environments from one service, one platform, to another,” Crawford said.
The tradeoff for all of this flexibility is a loss of visibility into, and manageability of, applications and data across systems. But there’s a benefit here, too: security is more programmatic than ever, and security teams can automate tasks that once required direct human involvement.
“Security is no longer fully constrained by the legacy physical environment. Enterprises are free to deploy infrastructure as software, which means they can make it more highly available. They can do service and remediation of issues in a way that isn’t disruptive to business. They can roll changes out progressively. They can define an immutable environment, an environment [that, once defined, does not change] until changes are needed,” Crawford said. “There are a lot of security benefits here.”
Those security benefits come in addition to the speed and scale associated with cloud, and include the automation possible with continuous development, continuous deployment, and continuous integration; the most mature organizations even extend this to continuous security.
No quick fix
So that should solve all of our problems, right? Not so fast. “One of the things that you have to keep in mind is that with this pace of innovation, with all that is being developed, are we at risk of creating something we can’t manage and secure? Do we have so much automation, so much innovation, so many places where we have to pay attention, that we’re creating an environment where we actually have less control rather than more?” Crawford asked.
Crawford provided some things to keep in mind to help enterprises ensure they aren’t moving too fast to secure their environments.
One concept Crawford suggested security managers keep in mind is the cyber defense matrix. “We traditionally thought of security primarily from an architectural point of view: building from the devices at the endpoint, the people who use them, the data that they handle and exchange across networks, those networks and how they’re integrated with the data center, and the applications that run in that data center. We have all thought about architecture,” Crawford said.
“Well, one of the dimensions we can add to that is process. Before we’re aware of any incidents or events, we want to identify what’s in the environment. We want to protect that environment — to take proactive steps to do so whenever we can before we have an actual threat that’s afoot in our organization. Once something does appear, and it always will, we want to be able to detect that quickly, with agility. We want to be able to respond, and we want to recover from these incidents and minimize the damage, minimize the impact on our organization,” he said.
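The matrix Crawford describes crosses asset classes (devices, users, data, networks, applications) with the five process functions (identify, protect, detect, respond, recover). A minimal sketch of that idea as a coverage map — the asset and control names here are illustrative, not a standard schema:

```python
# Sketch of the cyber defense matrix idea: asset classes crossed with the
# five security functions. Each cell holds the controls covering that
# intersection; empty cells are candidate blind spots.
ASSETS = ["devices", "users", "data", "networks", "applications"]
FUNCTIONS = ["identify", "protect", "detect", "respond", "recover"]

matrix = {(a, f): [] for a in ASSETS for f in FUNCTIONS}

def register(asset, function, control):
    """Record a control in its cell of the matrix."""
    matrix[(asset, function)].append(control)

def gaps():
    """Return cells with no registered control."""
    return [cell for cell, controls in matrix.items() if not controls]

# Hypothetical controls, for illustration only.
register("devices", "identify", "asset inventory scanner")
register("applications", "protect", "dependency vetting in CI")

print(len(gaps()))  # 23 of the 25 cells remain uncovered in this toy example
```

Walking the empty cells gives a security team a structured way to ask where, across all of these platforms and services, it has no visibility or response capability at all.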
First up, consider taking a look at where continuous innovation poses a security risk. One of the most prominent places to start is with development, Crawford said. “There are more tools today, for continuous integration, continuous deployment, taking software directly from the developer and putting it quickly into operations in ways that just weren’t possible years ago,” he said.
So one of the first things security teams must consider is how to get more directly involved in the process without slowing down the business. “We can actually build in security before it goes into operations, whether that’s integration with Jenkins or, even before that, getting involved with the source code itself,” he advised. That can include helping development teams write secure code or vetting the open-source components they use. “Wouldn’t you want to be aware that you’re building your project on top of a known vulnerable version or release of a project before you put it into production?” Crawford asked.
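One low-friction way to do that vetting is a pipeline step that checks a project’s pinned dependencies against known-bad releases before the build proceeds. A minimal sketch, assuming Python-style `name==version` pins; the advisory data here is made up for illustration and would in practice come from a vulnerability feed:

```python
# Sketch: flag pinned dependencies that match a known vulnerable release.
# The advisory set below is illustrative only.
KNOWN_VULNERABLE = {
    ("examplelib", "1.2.0"),
    ("otherlib", "0.9.1"),
}

def parse_pin(line):
    """Parse a 'name==version' requirement line; return None otherwise."""
    line = line.strip()
    if "==" not in line or line.startswith("#"):
        return None
    name, _, version = line.partition("==")
    return name.strip().lower(), version.strip()

def vet(requirements_text):
    """Return the pinned dependencies that are known vulnerable."""
    findings = []
    for line in requirements_text.splitlines():
        pin = parse_pin(line)
        if pin and pin in KNOWN_VULNERABLE:
            findings.append(pin)
    return findings

reqs = """\
examplelib==1.2.0
otherlib==1.0.0
safelib==2.4.1
"""
print(vet(reqs))  # [('examplelib', '1.2.0')]
```

A nonempty result can fail the CI job, surfacing the vulnerable release to developers before it ever reaches operations.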
There are similar opportunities to integrate security processes into operations, Crawford explained, and it’s crucial that it be done. There are many technology platforms and operational models enterprises can choose: traditional on-premises, virtualization, containerization, serverless. “And there’s the choice to use any number of services across that spectrum. So we have a really big array of choices in IT, and that means we have a really big array of challenges to get a handle on visibility across all these options,” Crawford explained.
The key for security teams is to understand how to operationalize security in these complex environments, and to be literate enough to provide knowledgeable guidance to the organization on adopting and employing these technologies securely.
*** This is a Security Bloggers Network syndicated blog from Business Insights In Virtualization and Cloud Security authored by George V. Hulme. Read the original post at: http://feedproxy.google.com/~r/BusinessInsightsInVirtualizationAndCloudSecurity/~3/ArHufaaeoF8/security-teams-tame-today-cloud-complexity