Just a few years ago, most IT environments consisted of deployed servers on which personnel installed applications, often as many as a single system could handle. Those servers then ran that way for years while the IT team maintained the systems and updated the applications as needed. Sometimes there were test versions of those systems, but not often, and even then the OS frequently didn't match the production version. The environment was static rather than dynamic; changes happened only when updates were released. Many IT departments also subscribed to the "if it ain't broke, don't fix it" school of thought, so updates were often ignored anyway.

Things started to evolve with the dual advent of virtualization and the rise of worms, viruses, and hacking: systems became images that needed to be updated more often. Companies began deploying images, generally putting one application on each, and running many more system images on a single piece of hardware. The number of systems proliferated even when the number of applications stayed the same. These images were still updated and maintained, and changes happened more often to keep up with security patches, but the result was still not what most would consider a truly dynamic environment.


Over the past five or so years, companies have again begun shifting how IT resources are deployed and managed. Several new methods of application deployment have emerged, including the following:

  • Automated Image creation and deployment
  • Immutable image deployment
  • Containers
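As a rough illustration of the container approach, where each image packages a single application and is replaced rather than patched in place, a minimal Dockerfile might look like the sketch below. The application name and base image are assumptions for the example, not taken from the text.

```dockerfile
# Hypothetical example: package one application per image.
# "myapp" and the alpine base are assumed names for illustration.
FROM alpine:3.19

# Copy only the single application this image exists to run.
COPY myapp /usr/local/bin/myapp

# Run as a non-root user. The image is treated as immutable:
# an update means building and deploying a new image,
# not patching this one in place.
RUN adduser -D appuser
USER appuser

ENTRYPOINT ["/usr/local/bin/myapp"]
```

Building and deploying a new tag of this image for every change (rather than logging in and updating a running server) is what makes these environments far more dynamic than the long-lived servers described earlier.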

Each of these has an impact on the security and management of assets. (Read up separately on CI/CD pipelines to see how these images are created.) We'll discuss each of these methods in our first post.