What the Next Era of Cloud Computing Means for AppSec & the SDLC

Since the 1990s, cloud adoption has moved through three logical phases: pioneering, mass adoption and management. The success of each phase led to the next, and we are in the management phase today. However, the problems that management-phase solutions haven't been able to address are now leading to a fourth phase: rethinking.

Rethinking is the phase where we must tackle two more complex cloud management challenges:

  • Delivering traditional pre-cloud-era benefits for which simply adopting new protocols is not sufficient
  • New problems for which there is no pre-cloud equivalent

The Pioneers

In the 1990s, cloud computing/SaaS was pioneered by Salesforce.com, Concur, Taleo, Eloqua and NetSuite. These companies established the model that multi-tenant software, delivered in the public cloud, created tremendous engineering efficiencies. Because they committed to public cloud environments, SaaS companies controlled the hardware platforms their code ran on. That meant they could create homogeneous production environments, and their ability to release quality software quickly increased dramatically. These vendors no longer had to support the stacks their customers deployed.

In public cloud environments, new code could be pushed to the entire production environment with a high degree of confidence because the QA process was streamlined and more efficient. Faster iteration cycles meant they could react to customer demands more quickly and deliver services that took huge market share from legacy on-prem competitors. The earliest adopters of the cloud were SMBs and mid-market organizations, primarily because their security requirements, customization needs and integration challenges were smaller than those of large enterprises, giving them a lower bar to entry.

Mass Adoption

By the mid-2000s, public cloud adoption had expanded dramatically. No longer were Salesforce.com or Concur an organization’s lone cloud services. SuccessFactors, Jive, Box, Tableau, ServiceNow, Workday, Zendesk, Apptio, Twilio, Nutanix and many other cloud services emerged and began to take significant market share from legacy vendors. The expansion was both in depth and breadth. SMB and mid-market organizations started to adopt “cloud-first” approaches, and large enterprises began to adopt their first public cloud services. Today, cloud applications have become so dominant that there are numerous services providing very narrow functionality, for example, e-signatures, with whispers of DocuSign going public later this year.


As cloud applications and infrastructure evolved from the exception to the norm, the movement created a series of secondary issues. Legacy on-prem solutions could no longer deliver their primary benefits in the cloud era. Hence, the third wave of cloud adoption has been about cloud management. Okta, New Relic, Mulesoft, Docker, Mesosphere, CASBs and others were founded to either deliver traditional benefits from the on-prem era in cloud environments or solve new problems created by widespread cloud adoption. Since everything is faster in the cloud, from developer releases to customer time-to-value, the more the cloud was adopted, the bigger the impact of its secondary effects and the faster the cloud management services have grown.

The Era of Rethinking

The era of cloud computing that we are starting to enter now is essentially cloud management 2.0. The market naturally solved the secondary cloud management challenges based on a combination of the amount of pain they caused, how widespread the pain was and how easy the pain was to solve. For example, as organizations developed cloud applications, the need for New Relic or AppDynamics became acute and widespread. As organizations adopted cloud applications for their employee users, the need for Okta also became widespread. However, in both of these examples, delivering the old benefits in the new environment was relatively straightforward. It required new protocols and processes, but not necessarily rethinking the core problems from the ground up.

Today’s cloud innovation era is about solving problems that either are completely new or require rethinking from the ground up. For example, look at the rapid growth of Kubernetes. Docker’s first release was in 2013, so the concept of managing containers is net-new.

The cloud problems that require rethinking are in many ways the most stubborn. The appearance of solving these problems often creates a head fake. Naturally, attempts to make the old way of solving a problem more efficient gain some traction first. For some, the incremental benefit may be compelling. However, because these problems fundamentally require rethinking, the initial solutions are unsatisfactory. They may manage some symptoms, but they don’t cure the disease.

Tackling Application Security

Application security (AppSec) in the cloud era is a primary example of where rethinking is required. Cloud development has dramatically changed the SDLC and it has had major security implications:

  • Faster releases mean less time for static analysis, dynamic analysis, configuration and managing alerts
  • Microservice architectures make understanding critical data flows significantly more complex

Current approaches to solving these issues are essentially incremental improvements on the old way of doing things. Runtime Application Self-Protection (RASP) and next-generation Web Application Firewalls (WAFs) do make application security a little more efficient. However, they were not designed for the cloud and thus have serious limitations.

RASPs can be tuned to be more efficient than old-school WAFs, which leads to fewer false positives in runtime. Similarly, pattern matching has improved, which can lead to slightly better data leakage detection. However, in both cases, the underlying assumptions are flawed because they attempt to make the old approach more efficient in a new environment that fundamentally requires new thinking. Are RASPs or next-gen WAFs leading to fewer security incidents? Are developers really more likely to pay attention to potential vulnerabilities now that the false positive rate has been lowered from 1/100 to 1/85? The unfortunate truth is that AppSec is actually falling further behind in the era of cloud computing.

In modern CI/CD you simply can’t make the assumption that a security or development team will have the time to do any of the following:

  • Perform comprehensive static/dynamic source code analysis and remediate the findings.
  • Understand how each microservice handles every piece of data, whether encrypting it, decrypting it, refactoring it, or passing it to other microservices and third-party libraries (for example, Uber posting credentials to GitHub).
  • Comprehensively understand how each third-party library will expect to receive information (for example, the SF Muni deserialization attack).
  • Instantly protect against every new CVE for open source software components (for example, patch or drop traffic).
  • Do all of the above for every release of every microservice.
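The last two items in the list above are exactly the kind of check that can only keep pace with CI/CD if it is automated. A minimal sketch of what an automated dependency audit step might look like is below; the advisory data and package names are hypothetical, and a real pipeline would pull advisories from a feed such as the NVD or OSV rather than a hardcoded table.

```python
# Illustrative sketch: flagging vulnerable third-party dependencies
# in a CI step. Package names and CVE ids are invented for the example.

# Hypothetical advisory feed: package -> (vulnerable versions, CVE id)
ADVISORIES = {
    "example-json-lib": ({"1.0.0", "1.0.1"}, "CVE-0000-0001"),
    "example-http-client": ({"2.3.0"}, "CVE-0000-0002"),
}

def audit(dependencies):
    """Return (package, version, cve) for every vulnerable pin."""
    findings = []
    for pkg, version in dependencies.items():
        if pkg in ADVISORIES:
            bad_versions, cve = ADVISORIES[pkg]
            if version in bad_versions:
                findings.append((pkg, version, cve))
    return findings

# A lockfile snapshot for this release; only example-json-lib matches
deps = {"example-json-lib": "1.0.1", "example-http-client": "2.4.0"}
print(audit(deps))
```

Running a step like this on every build is cheap; the hard part, as the list above argues, is doing it for every release of every microservice without human triage.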

Given the above list of limitations, the assumptions on which the current breed of security tools have been built won’t work for applications in the cloud. Code analysis, RASP and WAF were built for a manual era. Now that other pieces of the SDLC are being automated, the approach of speeding up manual processes simply isn’t going to work. At best, these legacy approaches can only slow the rate at which AppSec falls further behind.

What we need for AppSec in the cloud era is to fully automate security.

How can we deliver runtime protection with every release for:

  • CVEs in third-party libraries.
  • Unknown vulnerabilities in proprietary software your organization produces (remote code execution, cross-site scripting, etc.) for which there are no CVEs.
  • Sensitive data leakage (the age-old challenge that is only growing in scope).
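To make the last item concrete, here is a toy version of an automated runtime leakage check: a pattern scan for credit-card-shaped numbers in an outbound response body. Real leakage detection needs far more context than a regex, so treat this purely as a sketch of the shape of such a check, not a workable control.

```python
import re

# Illustrative sketch: pattern-based detection of sensitive data
# (credit-card-shaped numbers) in an outbound response body.
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def leaks_card_number(body: str) -> bool:
    """Return True if the response body contains a card-like number."""
    return CARD_RE.search(body) is not None
```

The point of automating even a crude check like this at runtime is that it runs on every response of every release, with no human in the loop.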

We know with reasonable certainty that SDLCs will keep getting faster. To keep up, security must become automated. Attempts to make manual security tools and processes faster will not work in the cloud era. But how do you automate comprehensive security that is also precise and performant?

Traditional AppSec approaches need human input to provide context. The tool itself is generic and requires a human to deliver context through configuration at the start, sorting through false positives at the end, and often both. However, this need for human-delivered context is a self-imposed limit.

Today, modern SDLCs seek continuous improvement. Dev informs prod and prod informs dev. Why should security be any different? If we take this approach and apply it to security, we can begin to rethink the problem.

AppSec for the Cloud Era

This is what we’ve done at ShiftLeft. Our goal isn’t to solve problems in dev or prod silos; it’s to create continuous AppSec.

We start by understanding how an application works: what the application is and is not supposed to do, how data flows into, out of and across microservices, and which third-party libraries are being used. This is not easy to do (we use semantic graphing), but once accomplished, it enables automated policy creation. So we changed the goal. Instead of finding and fixing all vulnerabilities in one part of the SDLC, which requires lots of human effort, we find vulnerabilities in order to automatically create runtime security profiles, which doesn’t require human intervention.
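The idea of deriving a policy from an understanding of data flow can be illustrated with a toy graph. ShiftLeft's actual semantic graphing is far richer than this; the sketch below only shows the core move: find every path from an untrusted source to a sensitive sink, then turn each path into a runtime rule. All node names are invented for the example.

```python
from collections import deque

# Toy data-flow graph: "data flows from node A to node B"
FLOWS = {
    "http_request": ["parse_json"],
    "parse_json": ["build_query", "log_event"],
    "build_query": ["sql_exec"],   # sensitive sink
    "log_event": [],
}
SOURCES = {"http_request"}  # untrusted input
SINKS = {"sql_exec"}        # dangerous operations

def source_to_sink_paths(flows, sources, sinks):
    """Breadth-first search for every source -> sink path."""
    paths = []
    queue = deque([source] for source in sources)
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node in sinks:
            paths.append(path)
            continue
        for nxt in flows.get(node, []):
            queue.append(path + [nxt])
    return paths

for path in source_to_sink_paths(FLOWS, SOURCES, SINKS):
    # Each discovered path becomes a runtime rule to monitor/sanitize.
    print("monitor:", " -> ".join(path))
```

Because the graph is derived from the code itself, the resulting profile can be regenerated on every release without a human re-describing the application.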

In runtime, in addition to protecting the application, we develop analytics on actual usage. These runtime analytics can then be used to help prioritize which vulnerabilities to consider fixing in the next dev cycle. And, because we understand dev and prod, we can even deliver the exact lines of code associated with each prioritized vulnerability.
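One simple way runtime analytics could feed prioritization is to weight each finding's severity by how often production traffic actually exercised the vulnerable path. The scoring scheme and data below are invented for illustration, not ShiftLeft's actual model.

```python
# Hypothetical findings: severity score plus the count of production
# requests that actually hit the vulnerable route.
findings = [
    {"id": "XSS-12",  "severity": 6, "runtime_hits": 12000},
    {"id": "SQLI-3",  "severity": 9, "runtime_hits": 40},
    {"id": "DESER-7", "severity": 8, "runtime_hits": 0},  # dead code path
]

# Weight severity by real exposure: unexercised code can wait.
ranked = sorted(findings,
                key=lambda f: f["severity"] * f["runtime_hits"],
                reverse=True)
for f in ranked:
    print(f["id"], f["severity"] * f["runtime_hits"])
```

Under this toy scheme, a medium-severity flaw on a hot path outranks a critical flaw in code production never touches, which is exactly the kind of call runtime data makes possible.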

Now that we’ve created a highly precise list of vulnerabilities, based on production data, and made fixing them more efficient, with the relevant lines of code, it’s appropriate to bring humans back into the equation. Developers then either quickly fix the confirmed vulnerabilities or rely on ShiftLeft to continue protecting areas of code weakness. Thus, we’ve created a fully automated continuous security cycle that can run as fast as any SDLC.

What Does it all Mean for the SDLC?

In this new era of solving the really hard challenges that remain in cloud computing, I think we’re going to see more and more AI and machine learning focused on understanding how code is supposed to work. That’s the key. When you can derive meaningful context without human intervention, you can confidently blur the boundaries between dev and prod. As the boundaries fall, we may be entering an era of not just continuous improvement, but instantaneous improvement. Regardless of whether we get to instantaneous, or how long it takes, the ability to merge meaningful data from both sides of the SDLC will lead to many new efficiencies. Security, APM and how teams are managed come to mind, but really, what aspects couldn’t be improved with this data?

What the Next Era of Cloud Computing Means for AppSec & the SDLC was originally published in ShiftLeft Blog on Medium, where people are continuing the conversation by highlighting and responding to this story.

This is a Security Bloggers Network syndicated blog post authored by Andrew Fife. Read the original post at: ShiftLeft Blog - Medium